
Memcached is one of the most widely used server-side caching and scaling technologies.

It was created by Brad Fitzpatrick in 2003 as a solution to scale his social media product, LiveJournal.


Trace updates

OpenCensus Go was used to instrument Brad's original Memcache client, with two major changes: all methods now take a context.Context object as the first argument, to allow for trace propagation, and each method is traced and measured with OpenCensus.

Available metrics

| Metric | Name | Description | Tags |
|---|---|---|---|
| Distribution of key lengths | "gomemcache/key_length" | The distribution and count of key lengths, in bytes | "method" |
| Distribution of value lengths | "gomemcache/value_length" | The distribution and count of value lengths, in bytes | "method" |
| Distribution of latencies | "gomemcache/latency" | The distribution and count of round-trip latencies of the various methods, in milliseconds | "method", "error", "status" |
| Number of calls | "gomemcache/calls" | The number of calls to the various methods | "method", "error", "status" |

Using it

go get -u -v

Enabling OpenCensus

To provide observability, we'll enable OpenCensus tracing and metrics.

Enabling Metrics

```go
package main

import (
	"log"

	// Import path of the instrumented client (assumed).
	"github.com/orijtech/gomemcache/memcache"
	"go.opencensus.io/stats/view"
)

func main() {
	if err := view.Register(memcache.AllViews...); err != nil {
		log.Fatalf("Failed to register Memcache views: %v", err)
	}
}
```

Enabling Tracing

You’ll just to enable any of the trace exporter in Go exporters

End to end example

For assistance installing Memcached, please visit the Memcached Installation wiki

With Memcached now installed and running, we can start the code sample. For simplicity in examining metrics and traces, we'll use Stackdriver Monitoring and Tracing.

For assistance setting up Stackdriver, please visit this Stackdriver setup guided codelab

Our sample is an application excerpt from a distributed prime factorization engine that needs to calculate square roots of big numbers, but would like to reuse the expensively calculated results, since taking square roots of such numbers is CPU intensive. To share/memoize results amongst our distributed applications, we'll use Memcached. On the first round, before a cache hit, we'll notice that the latency is high, but on a cache hit the latency decreases dramatically.

```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"math/big"
	"net/http"
	"net/http/httptest"
	"time"

	"contrib.go.opencensus.io/exporter/stackdriver"
	// Import path of the instrumented client (assumed).
	"github.com/orijtech/gomemcache/memcache"
	"go.opencensus.io/plugin/ochttp"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
)

func main() {
	flushFn, err := enableOpenCensusTracingAndMetrics()
	if err != nil {
		log.Fatalf("Failed to enable OpenCensus: %v", err)
	}
	defer func() {
		// Wait for ~60 seconds before exiting to allow metrics to be flushed.
		log.Println("Waiting for ~60s to allow metrics to be exported")
		time.Sleep(62 * time.Second)
		flushFn()
	}()

	mc := memcache.New("localhost:11211")

	cst := httptest.NewServer(&ochttp.Handler{
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			qv := r.URL.Query()
			query := qv.Get("v")

			ctx := r.Context()
			// Check Memcached if we've computed it before.
			memoizedSQRT, err := mc.Get(ctx, query)
			if memoizedSQRT != nil && len(memoizedSQRT.Value) > 0 && err == nil {
				w.Write(memoizedSQRT.Value)
				return
			}

			// Now compute the expensive operation.
			in, _, err := big.ParseFloat(query, 0, 1000, big.ToNearestEven)
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}

			sqrt := big.NewFloat(0).Sqrt(in)
			out, _ := sqrt.MarshalText()

			// Pause for 3 milliseconds as a throttle to "avoid CPU saturation".
			time.Sleep(3 * time.Millisecond)
			// Lastly, memoize it for a cache hit next time.
			_ = mc.Set(ctx, &memcache.Item{Key: query, Value: out})
			w.Write(out)
		}),
	})
	defer cst.Close()

	// Example inputs; substitute your own large numbers.
	values := []string{
		"9999999999999999999999183891883",
		"8273823819827398279389279832",
	}
	hc := &http.Client{Transport: &ochttp.Transport{}}
	for _, value := range values {
		log.Printf("In %s\n", value)
		ctx, span := trace.StartSpan(context.Background(), "CalculateSquareRoot-"+value)
		for i := 0; i < 2; i++ {
			startTime := time.Now()
			cctx, sspan := trace.StartSpan(ctx, fmt.Sprintf("Round-%d", i+1))
			req, _ := http.NewRequest("GET", cst.URL+"?v="+value, nil)
			req = req.WithContext(cctx)
			res, err := hc.Do(req)
			if err != nil {
				log.Printf("i=#%d, value=%q err: %v", i, value, err)
				sspan.End()
				continue
			}
			sqrtBlob, _ := ioutil.ReadAll(res.Body)
			_ = res.Body.Close()
			sspan.End()
			log.Printf("Round #%d\nSQRT %q\nTimeSpent: %s\n\n", i+1, sqrtBlob, time.Since(startTime))
		}
		// For clean up, we'll try to remove all the "values" so that the results
		// can be repeatable to demonstrate the cache misses and cache hits.
		_ = mc.Delete(ctx, value)
		span.End()
	}
}

func enableOpenCensusTracingAndMetrics() (func(), error) {
	sd, err := stackdriver.NewExporter(stackdriver.Options{
		// Please change these as needed.
		MetricPrefix: "demosqrtcache",
		ProjectID:    "census-demos",
	})
	if err != nil {
		return nil, err
	}

	// Enable tracing: for demo purposes, we'll always trace.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

	// Register as a tracing exporter.
	trace.RegisterExporter(sd)

	// Register as a metrics (stats) exporter.
	view.RegisterExporter(sd)

	// Most importantly register all the views for GoMemcache.
	if err := view.Register(memcache.AllViews...); err != nil {
		return nil, err
	}
	return sd.Flush, nil
}
```

Examining your traces

Please visit the Stackdriver Trace console.

Opening the console will show the "CalculateSquareRoot" traces; the first round (a cache miss) is markedly slower than the subsequent cache-hit round.


Examining your metrics

Please visit the Stackdriver Monitoring console.


Resources

Memcached clients instrumented with OpenCensus in Go and Python (Medium article)