Is this still maintained? #296
Sadly I think it’s dead. It doesn’t work for me at all, even on a single local machine.
Hi @Sinderella - if you need an alternative until this is fixed, I managed to get the other Redis Infrastructure v0.0.3 working. It comes with the Redis Dashboard for Prometheus Redis Exporter 1.x, and once I fixed a mistake I had made, it works fine and connects immediately. It turns out it does work with multiple instances/pools on the same host - I had my list of instances in my Prometheus folder named as a yml file instead of json, my bad. I had installed the full Redis Exporter plugin and it works great, though maybe the built-in one in the Grafana Agent would have been sufficient. The dashboard is a bit basic but neat and tidy.
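For anyone hitting the same file-naming issue: Prometheus file-based service discovery accepts target lists in either JSON or YAML, but the file extension has to match the content. A minimal sketch of what that might look like (file paths, hostnames, and the exporter port 9121 here are illustrative assumptions, not taken from this thread):

```yaml
# prometheus.yml (fragment): scrape redis_exporter targets discovered from a file
scrape_configs:
  - job_name: redis_exporter
    file_sd_configs:
      - files:
          # must actually contain YAML if named .yml (JSON content needs a .json name)
          - /etc/prometheus/targets/redis-instances.yml

# /etc/prometheus/targets/redis-instances.yml would then contain, e.g.:
# - targets:
#     - redis-host-1:9121
#     - redis-host-2:9121
#   labels:
#     env: production
```

Prometheus picks up changes to the discovery file automatically, so instances can be added without restarting the server.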
That's neat! I tried to use that one earlier, but it didn't work for me. Only the "Redis" one worked, and it's still a bit buggy. I guess I'll have to do more debugging. Thanks!
I'm trying to use it as a datasource for the TS.MRANGE command. It seems very buggy and non-deterministic - running the same query shows me a reasonable graph only every 5th request. Does anyone know of a Grafana datasource plugin for Redis time series data?
@NikolaBorisov TS.MRANGE works fine for me for a narrow range. It's a bit slow when the range is too wide, and sometimes it freezes. I don't think there's an alternative.
What version of Grafana do you have? |
I've been on the latest branch of version 8 for the last few months, just switched to 9.5.1, no issues so far.
FYI @Sinderella - my team is taking over maintenance of this plugin. We'll be adding a few new features, but of course there's also a bunch of issues that have piled up in the backlog that we'll be taking a look at and addressing.
Good to hear @slorello89. We are using the latest version of Redis Stack. We have time series data in Redis about machine learning model usage time. When I try to make a Grafana dashboard with this data, the dashboard is empty; only every 5th refresh does some data show up. I have Grafana 9.3.1. I'm also not sure how to use the "value label" field?
Hi @NikolaBorisov - interesting. I'm still trying to work out how everything works, but it looks like (at least in my configuration) the TS.MRANGE command will send a query for a range over the last hour - you can verify this by looking at what query it sends. I'm still working out where that bit of configuration comes from (I'm not setting it), but that's what's telling it what range to pull from Redis. I'd be particularly interested to see what that interval is for those dead instances - for what it's worth, whenever I run it, it seems to work just fine. I don't suppose you can monitor Redis while you're running those queries?
I don't know how frequently these time series are being added to, but I wonder if this is just due to the enormity of what you're requesting here. You are requesting 12 hours of data (43,200,000 milliseconds) for 10 time series - so even if you were only populating those time series once a second (I don't know the frequency of their updates, but once a second would be what we consider lightly used), you are looking at 432,000 unique records the TS.MRANGE has to analyze, and TS.MRANGE's time complexity is based on the number of records it's ranging over. I'd be curious to see what the slowlog looks like after one of these queries.

You might want to consider not leaning into the aggregation in the MRANGE itself and instead using compaction rules within your time series, so it doesn't have to do all those calculations. If you set your rules up correctly you wouldn't need the aggregation at all, and you'd have a guaranteed 7,200 records (60 minutes * 12 hours * 10 time series) to range over - which should be quite reasonable.
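A quick sanity check of the arithmetic above (just the record counts, nothing plugin-specific):

```python
# Raw path: 12 hours of once-per-second samples across 10 time series.
hours = 12
series = 10
raw_records = hours * 60 * 60 * series     # samples TS.MRANGE must scan
print(raw_records)                         # 432000

# Compacted path: 1-minute compaction buckets over the same window.
compacted_records = hours * 60 * series    # one bucket per minute per series
print(compacted_records)                   # 7200

# Requested range expressed in milliseconds.
range_ms = hours * 60 * 60 * 1000
print(range_ms)                            # 43200000
```

So the compaction-rule approach scans about 60x fewer records for the same dashboard window.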
The amount of data is not a problem here. I have not written a lot of data in this series, and I ask for an aggregation of 1 min. The Redis command completes in <100 ms. The bugs are in the Grafana Redis plugin.
Hi @NikolaBorisov - what does your setup look like? I cannot get this to happen on Grafana 9.3.1, but I have everything running locally, so that may not be representative of what you are experiencing. Each time I run the query, it works for me.
I'm running Grafana and Redis in k8s. I did some more digging, and it looks like sometimes the Grafana server returns a large set of results (400+) with maybe all the data, and sometimes it returns a smaller set of results (121).
Interesting @NikolaBorisov - so you are expecting to have 400+ time series in your result set? Sorry, I only saw 10 in the graph you sent before. I tried this with 400 time series, all with the same label. So I expect what's happening is that either the backend plugin or the frontend is getting hung by the enormous amount of data it's getting. Considering the JavaScript is literally hanging to the extent that it's giving me the "do you want to kill this page" messages, I suspect it might be the frontend, but I'm not sure. At least I can reproduce it now, so I can take a look.
@NikolaBorisov - in each of those frames, how large are the arrays of values? Check the JSON path $.results.A.frames[0].data.values[0].length on something like https://jsonpath.com/ (assuming you are using query A).
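If you'd rather check this locally than paste the response into jsonpath.com, here is a small stand-alone Python sketch. The payload shape is an assumption based on the $.results.A.frames[...] path mentioned above, and the numbers are made up:

```python
import json

# Hypothetical Grafana query response, trimmed to the fields the JSON path touches.
response = json.loads("""
{
  "results": {
    "A": {
      "frames": [
        {"data": {"values": [[1686700000000, 1686700060000, 1686700120000],
                             [1.0, 2.0, 3.0]]}},
        {"data": {"values": [[1686700000000], [42.0]]}}
      ]
    }
  }
}
""")

# Equivalent of $.results.A.frames[*].data.values[0].length:
# the length of each frame's timestamp array.
lengths = [len(frame["data"]["values"][0])
           for frame in response["results"]["A"]["frames"]]
print(lengths)  # [3, 1]
```

Summing those lengths over all frames gives the total number of samples the frontend has to render, which is the number that matters for the hang described above.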
My time series are very sparse; most of them (60%) had no data points during this time range. A few will have 720 points.
@NikolaBorisov - so if you are using fill values, that would fill the values of all the time series present in the result set (might only be between the min and max time in the series result set). I just noticed something in the code that might explain your issue.

Is your deployment of Redis a cluster? If so, are you using Redis Cloud/Redis Enterprise?

If yes, you are in a cluster, and no, you are not using Redis Enterprise - that could be the problem. It looks like the from field is being fed to the client as a key. If it's a cluster deployment, the key is hashed to determine which instance the client should go to to service its request.

Effectively this means the shard it would go to for the TS.MRANGE would be random. In an Enterprise/Cloud deployment that's actually ideal, because the DMC proxy will fan the TS.MRANGE out to all the shards in the cluster, bringing you back a complete result set. However, if you are not using RE/Cloud, it will not fan the TS.MRANGE out, limiting your result set to what's on the shard it happens to go to. I think that would more or less match the behavior you're seeing - if you are monitoring a Redis instance and you had 5 master shards, only 1/5th of the commands would reach your shards, and your result set would change more or less on every request given the from changes each time you reissue the request.
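To make the key-hashing point concrete: Redis Cluster maps every key to one of 16384 hash slots using CRC16 (the XMODEM variant), honoring {hash tag} semantics where only the tagged substring is hashed. A self-contained sketch of that slot calculation, so you can see how an arbitrary value fed in as a "key", such as a from timestamp, lands on an essentially arbitrary slot and therefore shard:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Redis Cluster hash slot for a key, honoring {hash tag} semantics."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("foo"))              # 12182 (matches CLUSTER KEYSLOT foo)
# Two arbitrary timestamp "keys" will usually land on different slots:
print(hash_slot("1686700000000"), hash_slot("1686700060000"))
```

In an open-source cluster, each slot lives on exactly one master shard and the client routes the whole command there, which is why a command keyed off a changing timestamp would hit a different shard on each refresh.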
I have just one instance of Redis Stack, not in a cluster.
@NikolaBorisov - I am going to close this issue, as the fundamental question of the issue has been answered; I've split your remaining issue out into #303.
I see that there have been no new commits for more than a year now. I've been using it and it's working well, even with Grafana 9.

However, seeing that there's not much activity going on, I feel like I may need to start looking for an alternative in case something stops working. I am not sure why this is the case; both Redis and Grafana are popular, but somehow they're not often used together?