Caddy is a modern server, built for deploying modern web services. As the complexity and variety of web service deployments grow, examining the behavior of the entire system quickly becomes difficult, and the gains from any updates, optimizations, or load shedding become hard to quantify or examine.
To fill this void, Caddy deserves modern observability.
Modern observability in a web server means that any operation after a request hits the server can be observed, measured, and examined, irrespective of the request's destination, the kind of serving the server performs, or the big-picture behavior of the system.
Distributed tracing and monitoring are the mechanisms by which we gain insight into the behavior of a distributed system.
Tracing gives us timing and locality information about the progression of a request. Using context propagation on the transport, e.g. HTTP, we can send information between remote services and, after completion, examine the propagated spans on the various backends, without any vendor or single-cloud lock-in.
With monitoring/metric collection, we can collect any quantifiable metrics such as:

* Client and server latency
* Memory statistics
* Runtime behavior
An added advantage of vendor-agnostic distributed tracing is that we can export these traces and metrics simultaneously to a plethora of backends that your site reliability engineers and other developers can examine, such as:

* Instana
* Prometheus/Grafana
* AWS X-Ray
* Zipkin
* Jaeger
* DataDog
* Stackdriver Monitoring and Tracing

and many more.
However, the distributed tracing and monitoring framework should provide very low latency and optionality, so that users of the server do not incur expensive overhead. The addition of observability to the web server should also not add a maintenance burden for your teams or for the project maintainers, nor should it require sophisticated and specialized knowledge of distributed systems and observability, nor sophisticated infrastructure deployments.
```shell
caddy -observability "<sampler_rate>;exporter1:[config1Key=config1Value:config2Key=config2Value...][,exporter2...]"
```
You can enable observability by checking out the instrumented branch of Caddy:
```shell
go get github.com/mholt/caddy/caddy
cd $GOPATH/src/github.com/mholt/caddy
git remote add orijtech https://github.com/orijtech/caddy.git
git fetch orijtech
git checkout instrument-with-opencensus
go get ./caddy
```
```
observability  := SamplerRate;Exporters
SamplerRate    := float64 value
Exporters      := [ExporterConfig](,ExporterConfig)*
ExporterConfig := Name:[Key-ValuePair]*
Name           := one of the symbolic exporter names
Key-ValuePair  := KeyToken=ValueToken
KeyToken       := string
ValueToken     := string
```
| Exporter | Key | Value | Description |
|---|---|---|---|
| aws-xray | AWS_REGION | string | The region that your project is located in (environment variable) |
| aws-xray | AWS_ACCESS_KEY_ID | string | Your access key ID (environment variable) |
| aws-xray | AWS_SECRET_ACCESS_KEY | string | Your secret access key (environment variable) |
| jaeger | agent | URL | The URL of the Jaeger agent |
| jaeger | collector | URL | The URL of the Jaeger collector |
| jaeger | service-name | string | The service name when inspected by Jaeger |
| prometheus | port | int | The port that will be scraped by your Prometheus server |
| stackdriver | GOOGLE_APPLICATION_CREDENTIALS | file path | The credentials for your Google Cloud Platform project (environment variable) |
| stackdriver | monitoring | boolean | A command-line option to toggle monitoring |
| stackdriver | tracing | boolean | A command-line option to toggle tracing |
| stackdriver | project-id | string | A command-line option for your Google Cloud Platform project ID |
| zipkin | local | URL | The URL of the local endpoint |
| zipkin | reporter | URL | The URL of the reporter endpoint |
| zipkin | service-name | string | The name of your service |
- A comprehensive example, run with all the environment variables too:
```shell
GOOGLE_APPLICATION_CREDENTIALS=./creds.json \
AWS_REGION=us-west-2 \
AWS_ACCESS_KEY_ID=keyId \
AWS_SECRET_ACCESS_KEY=secretKey \
caddy -observability "0.9;zipkin,prometheus:port=8999,aws-xray,stackdriver:tracing=true:monitoring=true:project-id=census-demos,jaeger:agent=localhost:6831,service-name=search-endpoint"
```