At KubeCon Europe a good chunk of the booths were observability stacks. Everyone was claiming they're better than the competitors (with some of them just justifying themselves by saying "it's written in Rust").
Having dealt with Prometheus (+Thanos) / Grafana / OTEL and other stacks (e.g. a custom solution on ClickHouse, Victoria{Metrics,Logs}, Jaeger/Tempo, Loki, ...) and even cloud ones (Google's Monarch rebranded as Prometheus)... what's your selling point? This to me seems like yet another way to re-invent the wheel.
If it's just for running locally, okay, fine, but when it comes to production (where the stack really matters) at scale, you end up with lots of tradeoffs and approaches.
Why is this one the winning one compared to the overwhelming "competition"? Seems like we're re-inventing the wheel for the 100th time instead of unifying efforts to make the existing solutions better. Thankfully we now have OTEL, so at least the interoperability part is somewhat solved (or mitigated).
yuppiepuppie 10 hours ago [-]
I was thinking this might be a result of the cheap-money (post-COVID) era ending and everyone scrambling to reduce their Datadog/cloud costs. Thinking back on 2023/2024, lots of companies were leaking large amounts of capital to those vendors, and I imagine lots of people saw an opportunity to create leaner and cheaper stacks.
robertlagrant 9 hours ago [-]
This is my instinct too. I've had the pleasure of using DataDog and the pain of negotiating with their salespeople!
Ocha 7 hours ago [-]
Yes. Their sales people don’t even negotiate - they just tell you this is the price and that's it. Dunno why they need a sales person if prices are non-negotiable.
parliament32 3 hours ago [-]
FWIW we've also tried all sorts of different things, and honestly the very vanilla stack (prometheus -> central thanos, fluentbit -> central loki, grafana) ends up on top. The resource consumption is surprisingly minimal (for a sense of scale, we run about 200k eps for metrics and 1k eps for logs). For all these solutions, I find myself asking the same question as you... what problem are you trying to solve? Is there anything actually different about your product other than less stability than the battle-tested stack?
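Not from the comment above, just to make the wiring concrete: a minimal sketch of the Prometheus-to-Thanos half of that "vanilla" stack, assuming the common sidecar deployment. Cluster names and targets are placeholders.

```yaml
# prometheus.yml on each cluster's Prometheus; unique external_labels let
# the central Thanos Querier deduplicate series coming from many instances
global:
  scrape_interval: 30s
  external_labels:
    cluster: prod-eu-1   # placeholder
    replica: a

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]  # placeholder target
```

A Thanos sidecar then runs next to each Prometheus, exposing its data to the central Querier and (optionally) uploading TSDB blocks to object storage; Fluent Bit ships logs to the central Loki in the same hub-and-spoke shape.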
NortySpock 6 hours ago [-]
If I can ask a separate question: what scalability problems did you run into with Victoria{Metrics|Logs|Traces}, and at what scale did you hit them?
VictoriaMetrics and Logs have worked fine in my quiet homelab, and VictoriaMetrics appeared to work great for the infrastructure team of an open source online video game I contribute to (say about 10 physical nodes and 20 applications/services)... I was going to suggest VictoriaLogs to them next but wanted to ask what roadblocks could come up.
yard2010 10 hours ago [-]
I have tried to self-host Grafana (Loki, Prometheus and Alloy) as an o11y stack for prepbook.app. This is hard. I have a BSc in CS, not that it says much. I managed to do it eventually, after some research. It was not plug and play in any way. The docs even kept saying this solution is not production ready. I couldn't find the production guide, only the "forget about self hosting and simply pay for us to host this". After I deployed it, the UX was so abrasive my partner wouldn't even try to go into it to figure out a problem. That was a few months ago. Since then new solutions have arrived and I'm waiting to have the time to migrate. I saw PostHog has a solution, but I prefer something I can self-host and completely own.
I thought: how come no one is trying to solve this problem? It looks like it's just a matter of time.
With that being said, my experience can be very skewed since prepbook is a passion project running on a VPS with essentially 0 scale. All I care about is the UX of the stack, not scale. Just for context.
embedding-shape 10 hours ago [-]
FWIW, I have no CS degree and barely attended school at all, and found Grafana + Prometheus + Loki fairly easy to set up, at least compared to what we used to use before those tools were available. Maybe it's because I used NixOS for the setup, but besides learning some new domain-specific things I didn't already know, I don't recall hitting any particular bumps or roadblocks. I also went the 100% self-hosted route (spread across two hosts at home).
What exactly were you struggling with when it came to the setup? Just a ton of new concepts to learn which took time, or something specific to Grafana/Prometheus/Loki?
dijit 9 hours ago [-]
"Getting it running" is the easy part.
"Getting it ready for production" is a different game.
I've fallen on my sword many times by trying to explain that Prometheus fails every metric of production readiness; in fact, Google themselves replaced Borgmon (which Prometheus is modeled on) with Monarch, because the "tiny unreliable time series databases everywhere" approach was, in fact, not the successful and reliable deployment strategy they had claimed.
But, it is very easy to set up. Just don't go looking for failure modes, because they're everywhere and every single one of them is catastrophic.
denysvitali 8 hours ago [-]
There are ways to scale Prometheus (look at Thanos), but none of the solutions is really bug-free.
See this PR for example (https://github.com/prometheus/prometheus/pull/18364) - it used to impact a production deployment I worked on. Prometheus, Thanos and even OpenTelemetry are full of these kinds of problems - but at the same time it's the best we have, and we should be grateful they're free and open source.
I'd still choose an open source stack (and contribute to it) rather than go for a proprietary solution - we've all seen what happens with DataDog & co.
Please don't take my words lightly: I worked with the rest of my team on a large-scale observability platform, and scalability should not be underestimated - at the same time, DataDog / Splunk prices are simply unjustified. It's ironically cheaper to build a team of engineers that will maintain a sane observability stack than to keep feeding the monster(s).
otterley 8 hours ago [-]
> It's ironically cheaper to build a team of engineers that will maintain a sane observability stack instead of feeding the monster(s).
Can you show the math here? This is a very bold claim, and I’m super curious. A shared Google Sheet would work well.
embedding-shape 8 hours ago [-]
Well, I am running the stack in production right now, but everyone has a different understanding of what that actually means...
Do you have concrete examples of these catastrophic failures? I personally haven't experienced any myself during these years, but I'm doing very boring and typical stuff, so it wouldn't surprise me if there were still hard edges.
dijit 8 hours ago [-]
There's a difficult distinction here, you're right.
Technically even a single server running LAMP as root while taking frontend traffic meets the definition of "in production", but I think we all recognise that it's not the right idea.
What I'm referring to is: should the disk start to have issues, what does Prometheus do? If the scrapers start to stall due to connection timeouts, what does Prometheus do? If you are doing linear interpolation of data and you have massive gaps because you're polling opportunistically, what does Prometheus do?
I'm all about boring technology, but prometheus assumes too much happy path. It assumes that a single node is enough for time series data that is used for alerting.
Which, it is: at very small scale and with best effort reliability.
It's not acceptable as soon as lost data could be critically important in diagnosing major issues in billing systems, or actually billing users, or to infer issues that need to be correlated across multiple systems.
jmalicki 2 hours ago [-]
What disk? You've already failed by not running distributed. The problem isn't Prometheus, it's "the cloud is too expensive, I'll just run on a single VPS".
dijit 2 hours ago [-]
Prometheus does not run a distributed tsdb.
jmalicki 1 hours ago [-]
Oh right, I forgot that even existed, I am used to seeing it as a layer that gets your data to a distributed tsdb
embedding-shape 8 hours ago [-]
> should the disk start to have issues
If that happens, is Prometheus really the biggest of your worries here? Software breaks left and right when disks disappear from under it; I'm not sure this is either unexpected or unique to Prometheus.
> If the scrapers start to stall due to connection timeouts: what does prometheus do?
I'm having this "issue" all the time, as some of my WiFi connected (less important) cameras are just within the WiFi range, and I'm using prometheus to scrape metrics from them. It seems like the requests times out, then the next time it doesn't, and everything just works? What's the issue you're experiencing with this exactly?
> It's not acceptable as soon as lost data could be critically important in diagnosing major issues in billing systems, or actually billing users, or
Wait, what? Billing systems? That stuff would go into your proper database, wouldn't it? Sure, if prometheus/node_exporter fails or whatever, you won't get metrics out of the host, but again, if those things start failing on that host, the host is having bigger issues than "prometheus sucks at scale".
I was eagerly awaiting to be educated about potential gaps in my understanding of Prometheus; instead it seems like you simply don't happen to like the way they do things? I was under the impression they did something wrong or something was broken, but these things just seem like the typical stuff you have to think about for any service you deploy.
dijit 8 hours ago [-]
Yes, my monitoring system not alerting me when the systems it runs on are failing is the entire problem.
That's not a general "software breaks when disks fail" situation: that's a monitoring system failing at its one job.
Your monitoring system failing silently when your infrastructure is under stress is precisely the failure mode that monitoring exists to prevent.
Zabbix solves this with native HA and self-checks. Prometheus makes it your problem to solve with external tooling, and most people don't, until they need it.
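For context on what "external tooling" usually means in practice (this is the common community pattern, not something the commenter prescribed): an always-firing "watchdog" alert routed to an external dead man's switch service, so it's the *absence* of the heartbeat that pages you. A minimal sketch; the group and alert names are illustrative.

```yaml
groups:
  - name: meta
    rules:
      - alert: Watchdog
        # vector(1) always evaluates, so this alert is permanently firing;
        # an external dead man's switch pages you only when the heartbeat
        # stops arriving, i.e. when Prometheus or Alertmanager themselves break
        expr: vector(1)
        labels:
          severity: none
        annotations:
          summary: Alerting pipeline heartbeat
```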
embedding-shape 8 hours ago [-]
Why wouldn't your monitoring system alert you when metrics suddenly disappear? Sounds like you need a better monitoring system, prometheus is not gonna magically solve that problem for you. No wonder you were having issues with prometheus...
dijit 7 hours ago [-]
I'm not sure what you mean.
Of course the systems that have to alert me to failure have to be designed with mechanisms to alert me to the fact that they themselves are failing.
Zabbix, Nagios, Munin -- practically everything that existed before: understood this.
Prometheus doesn't, because it intentionally optimised for being easy to deploy and for there being a hierarchy of Prometheus instances in a tree-like formation. Which makes sense, but it forces a much more distributed and difficult-to-reason-about model.
Monitoring systems can't be designed for the happy path. By definition, they only matter when things are going wrong, which is precisely when the happy path isn't available. Prometheus is excellent when everything is fine (scaling aside). That's not when you need your monitoring system to be excellent.
embedding-shape 7 hours ago [-]
I think we're running really different monitoring setups. I'd never expect my alerting solution to still be able to alert me if it's down or degraded, nor would I expect my metrics-gathering software to alert me if it's down; that's why I have monitoring set up for those things in the first place.
But, I'm sure your setup makes as much sense in your context as mine makes in my context. As long as it works for you, we're all happy :)
dijit 6 hours ago [-]
"I have monitoring set up for those things" - but that doesn't solve the ambiguity. When Prometheus misses a scrape, nothing fires. Silence looks identical whether your service is down, the network blipped, or Prometheus itself is struggling. A defensive monitoring system has to treat absence of data as a signal, not just absence of a problem.
ting0 9 hours ago [-]
Do you think Prometheus + Grafana is the way to go?
denysvitali 9 hours ago [-]
Really depends on the use case. Home lab? Probably.
Production? As soon as you scale you need a proper solution. Prometheus (by itself) doesn't scale - you need Mimir or Thanos (or similar).
Clickhouse (the "clickstack") seems to be the new kid on the block. Looks very promising.
parliament32 3 hours ago [-]
Note Clickhouse is quite old (2010ish?) but they've always been a "web server access log analytics" solution. The pivot to "we do observability too" is new, we'll see how that plays out. Not terribly optimistic given how badly a similar pivot went for Elastic, but who knows.
NortySpock 50 minutes ago [-]
I thought observability was shoved onto ClickHouse by other stacks deciding to use ClickHouse as their recommended database for observability (SigNoz springs to mind, but they were not the only one).
CyberDildonics 8 hours ago [-]
Is "observability stack" the new term for logs and stats?
ddux1389 2 hours ago [-]
Hey everyone, I'm the original creator of this project. Just saw this thread, I'll do my best to respond to everyone.
tecoholic 19 hours ago [-]
I was looking into this just yesterday. So the Loki + … comparison is a bit off in the open source space. The main ones in this space are SigNoz and ClickStack, both using ClickHouse as the database. Heavy compared to something like Loki, but they are OTEL-native rather than log monitoring, so not in the same category.
jillesvangurp 15 hours ago [-]
I used SigNoz + ClickStack on a vibe-coded Go server project a few weeks ago. I just made codex figure out how to set up SigNoz + dependencies via docker compose. I even got it to pre-populate SigNoz with dashboards. It wasn't too bad. The whole thing runs with a few GB. I tried to cover metrics, tracing, and logging at the same time. This is not a production-ready setup, but you need to trade off cost vs. utility here. If it's useful enough, that could justify the extra cost.
I have a background in having done a lot of stuff on the Elastic stack related to this; including setting up a big Elastic Fleet based stack for one client at some point. It might not be the cheapest, but it does provide awesome filtering and querying capabilities. However, a lot of teams that use it don't really know how to tap into that capability so it tends to be overengineered for what it does in the end. And the extra, underutilized complexity is why a lot of teams are wary of dealing with that stack.
Storing the data is the easy part but what's the point if you can't run queries against it and produce dashboards and diagnostic tools that actually help you? Prometheus/grafana or older graphite type setups tend to be compromises where you get lots of data but are then limited on the querying front or the number of metrics. The tradeoff is always between scale and querying flexibility. If you store tens/hundreds of GB of telemetry per day, you need a way to make sense of it. Clickhouse seems to be quite good at scaling and querying. It's basically a column database. I don't have direct experience with Loki.
But in the end, all that power only matters if people actually use it. And, again, in my experience teams tend not to. They tend to have a lot of unrealized aspirations around their tools and infrastructure. If it's just a dumping ground for data + a few simplistic dashboards, optimize for that. A lot of that data is actually only kept for compliance/auditing reasons. For that, querying is usually a secondary concern and it's OK if queries take a bit longer and are less powerful.
tecoholic 11 hours ago [-]
I agree. The sentiment applies to most analytics. People who set up analytics are not the same as the end users.
adenta 19 hours ago [-]
I'm partial to OpenObserve, especially because in Ruby the OTEL stuff isn't great for metrics and logs yet.
lytedev 19 hours ago [-]
I also run OpenObserve at home, but I can't help but feel that the interface could use some... sparkle, and the mobile experience kinda sucks.
But you can't beat the excellent price and performance. Does what I need and much more
I'm the main contributor to Traceway, I LOVE Elixir! Traceway is strictly for monitoring your app, not the actual usage/product analytics. It's for making sure you know how well your backend is performing and to be able to quickly fix issues that show up.
sexylinux 11 hours ago [-]
Why is it better? On the internet it is not enough to just say something. You need to deliver some facts and / or a comparison. Please try it.
etiam 8 hours ago [-]
Do you have any proof of that?
ting0 12 hours ago [-]
This looks cool
ddux1389 2 hours ago [-]
Thank you
ArslanS1997 5 hours ago [-]
This is awesome bro
ddux1389 2 hours ago [-]
Not the OP, but I am the one making Traceway, thank you
RGJorge 7 hours ago [-]
The "easy to set up" framing usually skips the hardest part: whether the metric you're alerting on is meaningful. Most stacks pull container memory from cAdvisor's `container_memory_usage_bytes`, which is the
same broken `memory_stats.usage` that `docker stats` reports — includes the kernel's reclaimable page cache. For DB containers with hot working sets, that metric stays at 95%+ constantly. Beautiful Grafana
dashboards alerting on a structurally wrong number. The fix is computing real anonymous memory (subtract active_file + inactive_file) — most stacks leave that as a custom exporter exercise. Curious how Traceway handles this out of the box.
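Not speaking for Traceway, but for anyone stuck with stock cAdvisor metrics, a hedged sketch of the recording rule the comment is describing. The rule name is made up, and whether `usage - cache` matches your kernel's accounting exactly depends on the cgroup version, so treat it as an approximation.

```yaml
groups:
  - name: container-memory
    rules:
      - record: container_memory_anon_approx_bytes
        # container_memory_usage_bytes includes reclaimable page cache, which
        # is why DB containers look pinned at 95%+; subtracting the cache
        # counter approximates actual anonymous memory. Compare with
        # container_memory_working_set_bytes, which only subtracts inactive_file.
        expr: container_memory_usage_bytes - container_memory_cache
```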
Unfortunately my account is being rate limited and I can't respond to each comment.
Thank you for your support; the attention the project has received has been unreal.
I'll be responding to everyone as the rate limit subsides but I've made this in the meantime: https://github.com/tracewayapp/traceway/blob/main/HN.md
Again, thank you for your support!
- ClickStack (ex HyperDX)
- SigNoz
- Traceway
- a few more
Does anyone have enough feedback on those to be able to tell which one works best?
https://github.com/plausible/analytics
Elixir.