Friday 22 January 2016
Let’s say I have a piece of software. In this case, it’s some automation for installing and upgrading Open edX. I want to know how it is being used, for example, how many people in the last month used certain versions or switches.
To collect information like that, I can put together a URL in the program, and ping that URL. What’s a good simple way to collect that information? What server or service is easy to use and can help me look at the data? Is this something I should use classic marketing web analytics for? Is there a more developer-centric service out there?
This is one of those things that seems easy enough to just do with bit.ly, or a dead-stupid web server with access logs, but I’m guessing there are better ways I don’t yet know about.
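To make the question concrete, here is a minimal sketch of what the ping could look like in Python. The endpoint, parameter names, and switches are all invented for illustration; the one real design point is that the beacon should be fire-and-forget, so a network failure never breaks the install itself.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical collection endpoint -- whatever service ends up
# receiving the pings.
BEACON_URL = "https://example.com/beacon"

def build_beacon(version, switches):
    # Encode the usage data as query parameters on the beacon URL.
    params = urlencode({"version": version, "switches": ",".join(switches)})
    return f"{BEACON_URL}?{params}"

def ping(url, timeout=2):
    # Fire-and-forget: swallow network errors so analytics can
    # never make the installer fail.
    try:
        urlopen(url, timeout=timeout).close()
    except OSError:
        pass
```

Everything interesting then happens on the server side, which is the part the question is really about.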
But if you want to automate the processing of that data to produce a report of some kind, you could have a second task that runs on a regular basis, reads from the first bucket, processes it, and writes the report to a second bucket.
Perhaps you could stray into using structured data stores instead of buckets if you need something fancier.
* Google Analytics
* Heap Analytics
> a dead-stupid web server with access logs
Don't ever underestimate the power of dead-stupid! :)
Our infrastructure was exactly as you describe. But the killer feature was forwarding the access logs to a Splunk indexer. Our access log data then became searchable in near real-time. Splunk is an awesome tool for slicing and dicing data like this. With the profiling data beaconed back, we were able to gather a rich set of metrics from real users. We used Splunk to segment and analyze the data very quickly and produce reports.
NGINX + Splunk worked great for this task, and it was trivial to configure.
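For context, the NGINX side of such a setup can be a few lines of configuration. This fragment is a sketch, not the commenter's actual config; the log format, paths, and endpoint are assumptions:

```nginx
# Hypothetical beacon endpoint: log the request, return an empty response.
log_format beacon '$remote_addr $time_iso8601 "$request" $status';

server {
    listen 80;

    location /beacon {
        access_log /var/log/nginx/beacon.log beacon;
        return 204;  # no body needed; the access log is the data
    }
}
```

A Splunk forwarder (or any log shipper) watching `/var/log/nginx/beacon.log` then makes the pings searchable.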