
Memcached is one of the most widely used server-side caching and scaling technologies.

It was created by Brad Fitzpatrick in 2003 to scale his social media product, LiveJournal.


Trace updates

OpenCensus Go was used to instrument Brad’s original Memcache client, with two major changes:

all client methods now take a context.Context object as the first argument, to allow for trace propagation, and metrics are now recorded for every operation (see “Available metrics” below).

Available metrics

Metric                                Name              Description
Number of cache misses                cache_misses      The number of cache misses
Number of cache hits                  cache_hits        The number of cache hits
Number of errors                      errors            The number of general errors, disambiguated by tags “method”, “reason”, “type”
Number of compare-and-swap conflicts  cas_conflicts     The number of CAS conflicts
Number of unstored results            unstored_results  The number of unstored results
Distribution of key lengths           key_length        The distributions and counts of key lengths, in bytes
Distribution of value lengths         value_length      The distributions and counts of value lengths, in bytes
Distribution of latencies             latency           The distributions and counts of latencies in milliseconds, by tag “method”
Number of calls                       calls             The number of calls, broken down by tag “method”
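The `key_length` and `value_length` views above record observed sizes into distribution buckets rather than raw values. A stdlib-only sketch of that bucketing idea follows; the bucket boundaries here are illustrative, not the ones the instrumented client actually uses.

```go
package main

import (
	"fmt"
	"sort"
)

// bucketize counts how many observed sizes (in bytes) fall into each
// distribution bucket. Buckets are upper-exclusive: bucket i holds values
// s with bounds[i-1] <= s < bounds[i]; the last bucket is unbounded.
func bucketize(sizes []int, bounds []int) []int {
	counts := make([]int, len(bounds)+1)
	for _, s := range sizes {
		// index of the first bound strictly greater than s
		i := sort.SearchInts(bounds, s+1)
		counts[i]++
	}
	return counts
}

func main() {
	keyLengths := []int{3, 7, 12, 250, 1024}
	bounds := []int{10, 100, 1000} // buckets: [0,10), [10,100), [100,1000), [1000,+inf)
	fmt.Println(bucketize(keyLengths, bounds))
}
```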

Using it

go get -u -v

Enabling OpenCensus

To provide observability, we’ll enable OpenCensus tracing and metrics.

Enabling Metrics

package main

import (
	"log"

	"github.com/orijtech/gomemcache/memcache"
	"go.opencensus.io/stats/view"
)

func main() {
	if err := view.Register(memcache.AllViews...); err != nil {
		log.Fatalf("Failed to register Memcache views: %v", err)
	}
}

Enabling Tracing

You’ll just need to enable any of the trace exporters listed in the Go exporters.

End to end example

For assistance installing Memcached, please visit the Memcached Installation wiki

With Memcached now installed and running, we can start the code sample. For simplicity in examining metrics and traces, we’ll use Stackdriver Monitoring and Tracing.

For assistance setting up Stackdriver, please visit this Stackdriver setup guided codelab

Our sample is an application excerpt from a distributed prime factorization engine that needs to calculate square roots of big numbers but would like to reuse expensively calculated results since square roots of such numbers are CPU intensive. To share/memoize results amongst our distributed applications, we’ll use Memcache. In the first round, before a cache hit, we’ll notice that the latency is high, but on cache hit the latency decreases dramatically.

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"math/big"
	"net/http"
	"net/http/httptest"
	"time"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"github.com/orijtech/gomemcache/memcache"
	"go.opencensus.io/plugin/ochttp"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
)

func main() {
	flushFn, err := enableOpenCensusTracingAndMetrics()
	if err != nil {
		log.Fatalf("Failed to enable OpenCensus: %v", err)
	}
	defer func() {
		// Wait for ~60 seconds before exiting to allow metrics to be flushed
		log.Println("Waiting for ~60s to allow metrics to be exported")
		<-time.After(62 * time.Second)
		flushFn()
	}()

	mc := memcache.New("localhost:11211")

	cst := httptest.NewServer(&ochttp.Handler{
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			qv := r.URL.Query()
			query := qv.Get("v")
			ctx := r.Context()

			// Check Memcached if we've computed it before
			memoizedSQRT, err := mc.Get(ctx, query)
			if memoizedSQRT != nil && len(memoizedSQRT.Value) > 0 && err == nil {
				_, _ = w.Write(memoizedSQRT.Value)
				return
			}

			// Now compute the expensive operation
			in, _, err := big.ParseFloat(query, 0, 1000, big.ToNearestEven)
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}

			sqrt := big.NewFloat(0).Sqrt(in)
			out, _ := sqrt.MarshalText()

			// Pause for 3 milliseconds as a throttle to "avoid CPU saturation".
			<-time.After(3 * time.Millisecond)
			// Lastly, memoize it for a cache hit next time
			_ = mc.Set(ctx, &memcache.Item{Key: query, Value: out})
			_, _ = w.Write(out)
		}),
	})
	defer cst.Close()

	// Placeholder inputs (the original list was elided); any big numbers work.
	values := []string{
		"9999999999999999999999999999999999999999999999999",
		"987654321234567898765432123456789",
	}
	hc := &http.Client{Transport: &ochttp.Transport{}}
	for _, value := range values {
		log.Printf("In %s\n", value)
		ctx, span := trace.StartSpan(context.Background(), "CalculateSquareRoot-"+value)
		for i := 0; i < 2; i++ {
			startTime := time.Now()
			cctx, sspan := trace.StartSpan(ctx, fmt.Sprintf("Round-%d", i+1))
			req, _ := http.NewRequest("GET", cst.URL+"?v="+value, nil)
			req = req.WithContext(cctx)
			res, err := hc.Do(req)
			if err != nil {
				log.Printf("i=#%d, value=%q err: %v", i, value, err)
				sspan.End()
				continue
			}
			sqrtBlob, _ := ioutil.ReadAll(res.Body)
			_ = res.Body.Close()
			sspan.End()
			log.Printf("Round #%d\nSQRT %q\nTimeSpent: %s\n\n", i+1, sqrtBlob, time.Since(startTime))
		}
		// For clean up, we'll try to remove all the "values" so that the results
		// can be repeatable to demonstrate the cache misses and cache hits.
		_ = mc.Delete(ctx, value)
		span.End()
	}
}

func enableOpenCensusTracingAndMetrics() (func(), error) {
	sd, err := stackdriver.NewExporter(stackdriver.Options{
		MetricPrefix: "sqrtapp",
		ProjectID:    "census-demos", // Please change this as needed
	})
	if err != nil {
		return nil, err
	}

	// Enable tracing: for demo purposes, we'll always trace
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

	// Register as a tracing exporter
	trace.RegisterExporter(sd)

	// Register as a metrics exporter
	view.RegisterExporter(sd)
	view.SetReportingPeriod(60 * time.Second)
	if err := view.Register(memcache.AllViews...); err != nil {
		return nil, err
	}
	if err := view.Register(ochttp.DefaultServerViews...); err != nil {
		return nil, err
	}
	if err := view.Register(ochttp.DefaultClientViews...); err != nil {
		return nil, err
	}
	return sd.Flush, nil
}

Examining your traces

Please visit the Stackdriver Trace console.

Opening our console will show the spans for each round: the cache-miss round visibly slower than the cache-hit round.

Examining your metrics

Please visit the Stackdriver Monitoring console.


Resource: Memcached clients instrumented with OpenCensus in Go and Python (Medium article)