Prometheus exporter: go get -u -v contrib.go.opencensus.io/exporter/prometheus
Brief Overview
By the end of this tutorial, we will do these four things to obtain metrics using OpenCensus:
Create quantifiable metrics (numerical) that we will record
Create tags that we will associate with our metrics
Organize our metrics, similar to writing a report, into a View
Export our views to a backend (Prometheus in this case)
Getting Started
Unsure how to write and execute Go code? Click here.
We will be writing a simple “read-evaluate-print-loop” (REPL) app. In it we’ll collect some metrics to observe the work that is going on within this code, such as:
Latency per processing loop
Number of lines read
Number of errors
Line lengths
First, create a file called repl.go.
touch repl.go
Next, put the following code inside of repl.go:
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(in []byte) (out []byte, err error) {
    return bytes.ToUpper(in), nil
}
You can run the code via go run repl.go.
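If you then type a line and press enter, the loop echoes it back capitalized. A sample session (the input text here is arbitrary) might look like this:

$ go run repl.go
> hello world
< HELLO WORLD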
Enable Metrics
Import Packages
To enable metrics, we’ll import a couple of packages:
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(in []byte) (out []byte, err error) {
    return bytes.ToUpper(in), nil
}
Create Metrics
First, we will create the variables needed to later record our metrics.
var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(in []byte) (out []byte, err error) {
    return bytes.ToUpper(in), nil
}
Create Tags
Now we will create the variables we will later need to add extra text metadata to our metrics.
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(in []byte) (out []byte, err error) {
    return bytes.ToUpper(in), nil
}
We will later use this tag, called KeyMethod, to record what method is being invoked. In our scenario, we will only use it to record that “repl” is the method processing our data.
Again, the choice of keys is arbitrary and purely up to the user. For example, if we wanted to track what operating system a user is using, we could do so like this:
osKey, _ := tag.NewKey("operating_system")
Later, when we use osKey, we will be given an opportunity to enter values such as “windows” or “mac”.
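For illustration only (this snippet is not part of the tutorial’s repl.go, and the “mac” value is just an example), a value for that hypothetical osKey could be attached to a context the same way we will attach “repl” below:

// Hypothetical sketch: tag a context with an operating-system value.
// osKey comes from tag.NewKey("operating_system") above.
ctx, err := tag.New(context.Background(), tag.Insert(osKey, "mac"))
if err != nil {
    log.Fatal(err)
}
// ctx now carries operating_system=mac and could be passed to stats.Record.
_ = ctx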
Inserting Tags
Now we will insert a specific tag value, “repl”, for the method key. This gives us a new context.Context ctx, which we will use later when recording our metrics. This ctx carries all tags that have previously been inserted, thus allowing for context propagation.
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(in []byte) (out []byte, err error) {
    return bytes.ToUpper(in), nil
}
When recording metrics, we will need the ctx from tag.New. We will be recording metrics in processLine, so let’s go ahead and make ctx available now.
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    return bytes.ToUpper(in), nil
}
Recording Metrics
With ctx in hand, we can now record our metrics with stats.Record: the latency of each loop in readEvaluateProcess, and both latency and line length in processLine, using a small helper to convert durations to milliseconds.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    startTime := time.Now()
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }
    defer func() {
        if terr != nil {
            ctx, _ = tag.New(ctx, tag.Upsert(KeyStatus, "ERROR"), tag.Upsert(KeyError, terr.Error()))
        }
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)))
    }()

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    startTime := time.Now()
    defer func() {
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)), MLineLengths.M(int64(len(in))))
    }()

    return bytes.ToUpper(in), nil
}

func sinceInMilliseconds(startTime time.Time) float64 {
    return float64(time.Since(startTime).Nanoseconds()) / 1e6
}
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    startTime := time.Now()
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }
    defer func() {
        if terr != nil {
            ctx, _ = tag.New(ctx, tag.Upsert(KeyStatus, "ERROR"), tag.Upsert(KeyError, terr.Error()))
        }
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)))
    }()

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    startTime := time.Now()
    defer func() {
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)), MLineLengths.M(int64(len(in))))
    }()

    return bytes.ToUpper(in), nil
}

func sinceInMilliseconds(startTime time.Time) float64 {
    return float64(time.Since(startTime).Nanoseconds()) / 1e6
}
Enable Views
We will be adding the View package: "go.opencensus.io/stats/view"
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    startTime := time.Now()
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }
    defer func() {
        if terr != nil {
            ctx, _ = tag.New(ctx, tag.Upsert(KeyStatus, "ERROR"), tag.Upsert(KeyError, terr.Error()))
        }
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)))
    }()

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    startTime := time.Now()
    defer func() {
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)), MLineLengths.M(int64(len(in))))
    }()

    return bytes.ToUpper(in), nil
}

func sinceInMilliseconds(startTime time.Time) float64 {
    return float64(time.Since(startTime).Nanoseconds()) / 1e6
}
Create Views
We now determine how our metrics will be organized by creating Views.
var (
    LatencyView = &view.View{
        Name:        "demo/latency",
        Measure:     MLatencyMs,
        Description: "The distribution of the latencies",

        // Latency in buckets:
        // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
        Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
        TagKeys:     []tag.Key{KeyMethod},
    }

    LineCountView = &view.View{
        Name:        "demo/lines_in",
        Measure:     MLineLengths,
        Description: "The number of lines from standard input",
        Aggregation: view.Count(),
    }

    LineLengthView = &view.View{
        Name:        "demo/line_lengths",
        Description: "Groups the lengths of keys in buckets",
        Measure:     MLineLengths,

        // Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
        Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
    }
)
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

var (
    LatencyView = &view.View{
        Name:        "demo/latency",
        Measure:     MLatencyMs,
        Description: "The distribution of the latencies",

        // Latency in buckets:
        // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
        Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
        TagKeys:     []tag.Key{KeyMethod},
    }

    LineCountView = &view.View{
        Name:        "demo/lines_in",
        Measure:     MLineLengths,
        Description: "The number of lines from standard input",
        Aggregation: view.Count(),
    }

    LineLengthView = &view.View{
        Name:        "demo/line_lengths",
        Description: "Groups the lengths of keys in buckets",
        Measure:     MLineLengths,

        // Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
        Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
    }
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    startTime := time.Now()
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }
    defer func() {
        if terr != nil {
            ctx, _ = tag.New(ctx, tag.Upsert(KeyStatus, "ERROR"), tag.Upsert(KeyError, terr.Error()))
        }
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)))
    }()

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    startTime := time.Now()
    defer func() {
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)), MLineLengths.M(int64(len(in))))
    }()

    return bytes.ToUpper(in), nil
}

func sinceInMilliseconds(startTime time.Time) float64 {
    return float64(time.Since(startTime).Nanoseconds()) / 1e6
}
Register Views
We now register the views.
func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // Register the views
    if err := view.Register(LatencyView, LineCountView, LineLengthView); err != nil {
        log.Fatalf("Failed to register views: %v", err)
    }

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}
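Registering the views is all we need here, because Prometheus pulls data on its own scrape interval. If you ever use a push-based exporter, the view package also lets you tune how often recorded data is reported; a minimal, optional sketch:

// Optional: report recorded data to exporters every second.
// Not required for the Prometheus scrape endpoint used in this tutorial.
view.SetReportingPeriod(1 * time.Second)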
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

var (
    LatencyView = &view.View{
        Name:        "demo/latency",
        Measure:     MLatencyMs,
        Description: "The distribution of the latencies",

        // Latency in buckets:
        // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
        Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
        TagKeys:     []tag.Key{KeyMethod},
    }

    LineCountView = &view.View{
        Name:        "demo/lines_in",
        Measure:     MLineLengths,
        Description: "The number of lines from standard input",
        Aggregation: view.Count(),
    }

    LineLengthView = &view.View{
        Name:        "demo/line_lengths",
        Description: "Groups the lengths of keys in buckets",
        Measure:     MLineLengths,

        // Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
        Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
    }
)

func main() {
    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // Register the views
    if err := view.Register(LatencyView, LineCountView, LineLengthView); err != nil {
        log.Fatalf("Failed to register views: %v", err)
    }

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    startTime := time.Now()
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }
    defer func() {
        if terr != nil {
            ctx, _ = tag.New(ctx, tag.Upsert(KeyStatus, "ERROR"), tag.Upsert(KeyError, terr.Error()))
        }
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)))
    }()

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    startTime := time.Now()
    defer func() {
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)), MLineLengths.M(int64(len(in))))
    }()

    return bytes.ToUpper(in), nil
}

func sinceInMilliseconds(startTime time.Time) float64 {
    return float64(time.Since(startTime).Nanoseconds()) / 1e6
}
Exporting stats
Register the views
// Register the views
if err := view.Register(LatencyView, LineCountView, LineLengthView); err != nil {
    log.Fatalf("Failed to register views: %v", err)
}
Import Packages
We will be adding the Prometheus Go exporter package: "contrib.go.opencensus.io/exporter/prometheus"
Create the exporter
In order for our metrics to be exported to Prometheus, our application needs to be exposed as a scrape endpoint.
The OpenCensus Go Prometheus exporter is an http.Handler that MUST be attached
to http endpoint “/metrics”.
import (
    "log"
    "net/http"

    "contrib.go.opencensus.io/exporter/prometheus"
    "go.opencensus.io/stats/view"
)

func main() {
    pe, err := prometheus.NewExporter(prometheus.Options{
        Namespace: "ocmetricstutorial",
    })
    if err != nil {
        log.Fatalf("Failed to create the Prometheus stats exporter: %v", err)
    }

    // Now finally run the Prometheus exporter as a scrape endpoint.
    // We'll run the server on port 8888.
    go func() {
        mux := http.NewServeMux()
        mux.Handle("/metrics", pe)
        if err := http.ListenAndServe(":8888", mux); err != nil {
            log.Fatalf("Failed to run Prometheus scrape endpoint: %v", err)
        }
    }()
}
End to end code
Collectively, the code will be:
package main

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "time"

    "contrib.go.opencensus.io/exporter/prometheus"
    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
    "go.opencensus.io/tag"
)

var (
    // The latency in milliseconds
    MLatencyMs = stats.Float64("repl/latency", "The latency in milliseconds per REPL loop", "ms")

    // Counts/groups the lengths of lines read in.
    MLineLengths = stats.Int64("repl/line_lengths", "The distribution of line lengths", "By")
)

var (
    KeyMethod, _ = tag.NewKey("method")
    KeyStatus, _ = tag.NewKey("status")
    KeyError, _  = tag.NewKey("error")
)

var (
    LatencyView = &view.View{
        Name:        "demo/latency",
        Measure:     MLatencyMs,
        Description: "The distribution of the latencies",

        // Latency in buckets:
        // [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
        Aggregation: view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000),
        TagKeys:     []tag.Key{KeyMethod},
    }

    LineCountView = &view.View{
        Name:        "demo/lines_in",
        Measure:     MLineLengths,
        Description: "The number of lines from standard input",
        Aggregation: view.Count(),
    }

    LineLengthView = &view.View{
        Name:        "demo/line_lengths",
        Description: "Groups the lengths of keys in buckets",
        Measure:     MLineLengths,

        // Lengths: [>=0B, >=5B, >=10B, >=15B, >=20B, >=40B, >=60B, >=80B, >=100B, >=200B, >=400B, >=600B, >=800B, >=1000B]
        Aggregation: view.Distribution(0, 5, 10, 15, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000),
    }
)

func main() {
    // Register the views, it is imperative that this step exists
    // lest recorded metrics will be dropped and never exported.
    if err := view.Register(LatencyView, LineCountView, LineLengthView); err != nil {
        log.Fatalf("Failed to register the views: %v", err)
    }

    // Create the Prometheus exporter.
    pe, err := prometheus.NewExporter(prometheus.Options{
        Namespace: "ocmetricstutorial",
    })
    if err != nil {
        log.Fatalf("Failed to create the Prometheus stats exporter: %v", err)
    }

    // Now finally run the Prometheus exporter as a scrape endpoint.
    // We'll run the server on port 8888.
    go func() {
        mux := http.NewServeMux()
        mux.Handle("/metrics", pe)
        if err := http.ListenAndServe(":8888", mux); err != nil {
            log.Fatalf("Failed to run Prometheus scrape endpoint: %v", err)
        }
    }()

    // In a REPL:
    // 1. Read input
    // 2. process input
    br := bufio.NewReader(os.Stdin)

    // repl is the read, evaluate, print, loop
    for {
        if err := readEvaluateProcess(br); err != nil {
            if err == io.EOF {
                return
            }
            log.Fatal(err)
        }
    }
}

// readEvaluateProcess reads a line from the input reader and
// then processes it. It returns an error if any was encountered.
func readEvaluateProcess(br *bufio.Reader) (terr error) {
    startTime := time.Now()
    ctx, err := tag.New(context.Background(), tag.Insert(KeyMethod, "repl"), tag.Insert(KeyStatus, "OK"))
    if err != nil {
        return err
    }
    defer func() {
        if terr != nil {
            ctx, _ = tag.New(ctx, tag.Upsert(KeyStatus, "ERROR"), tag.Upsert(KeyError, terr.Error()))
        }
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)))
    }()

    fmt.Printf("> ")
    line, _, err := br.ReadLine()
    if err != nil {
        return err
    }

    out, err := processLine(ctx, line)
    if err != nil {
        return err
    }
    fmt.Printf("< %s\n\n", out)
    return nil
}

// processLine takes in a line of text and
// transforms it. Currently it just capitalizes it.
func processLine(ctx context.Context, in []byte) (out []byte, err error) {
    startTime := time.Now()
    defer func() {
        stats.Record(ctx, MLatencyMs.M(sinceInMilliseconds(startTime)), MLineLengths.M(int64(len(in))))
    }()

    return bytes.ToUpper(in), nil
}

func sinceInMilliseconds(startTime time.Time) float64 {
    return float64(time.Since(startTime).Nanoseconds()) / 1e6
}
Running the tutorial
This step involves running the tutorial application in one terminal and then Prometheus itself in another terminal.
Having properly installed Go, run the following in one terminal:
go run repl.go
Prometheus configuration file
To enable Prometheus to scrape from your application, we have to point it towards the tutorial application whose
server is running on “localhost:8888”.
To do this, we first need to create a YAML file with the configuration, e.g. promconfig.yaml, whose contents are: