Liquidsoap managing 800 simultaneous live video streams #1770
Unanswered
hailfinger
asked this question in Q&A
Replies: 4 comments
-
Hey! We should be able to start testing this now. Are you still available?
-
Yes! Tagging @mander1000 because we're both working on that project.
-
Okay! How should we proceed next? Can we help build a test script?
-
Moving this to discussions.
-
Is your feature request related to a problem? Please describe.
Is it feasible to use Liquidsoap for 800 simultaneous live video streams? This seems to be a little larger than the deployments mentioned in the FOSDEM presentation and in the documentation.
The 800 video+audio streams will be sent to the server via SRT or RTMP (SRT preferred) from Raspberry Pi computers doing H.264+AAC encoding of a video+audio source. One source / one stream per Raspberry Pi. These sources may have somewhat unreliable internet connections (unstable DSL, LTE, WiFi) with bandwidth limitations and packet loss. Each source delivers up to 1500 kbit/s total for video+audio. The resolution and frame rate of the video from the sources are either 720p25 or deinterlaced PAL at 25 fps; both are constant for a given source.
SRT/RTMP should use authentication features to make sure third parties cannot inject unauthorized content.
Statistics from SRT (especially packet loss) should be dumped continuously per stream to allow monitoring and/or possibly telling the source to adjust its bitrate to a lower value.
SRT's FEC and ARQ features should be usable to reduce packet loss.
For a small number (max. 10) of high-priority sources, a dual uplink strategy should send two SRT streams to the same liquidsoap instance in case one fails. Failover should be seamless in this case.
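To make the dual-uplink requirement concrete, here is a rough sketch of what it could look like in a Liquidsoap 2.x script. The ports, passphrase, and variable names are placeholders I made up, and this is untested, not a known-good configuration:

```liquidsoap
# Sketch only: two SRT uplinks from the same source, with failover.
# Ports and passphrase are hypothetical placeholders.
primary = input.srt(port=7001, mode="listener", passphrase="changeme")
backup  = input.srt(port=7002, mode="listener", passphrase="changeme")

# track_sensitive=false lets fallback() switch away from a dead
# primary as soon as it stops delivering, instead of waiting for a
# track boundary. How "seamless" this is in practice would need testing.
live = fallback(track_sensitive=false, [primary, backup])
```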
For viewers, three different HLS streams will be available per source.
For pure listeners, an additional audio-only stream will be available per source.
Viewers should be able to switch between different quality levels/bitrates seamlessly from the HLS video viewer. The video viewer might be one of video.js or hls.js depending on how well they work.
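For reference, a multi-rendition HLS output for one source might look roughly like the following in Liquidsoap 2.x. The encoder settings, bitrates, and paths are illustrative assumptions only, and `live` stands for an already-ingested source:

```liquidsoap
# Sketch only: three video renditions plus an audio-only rendition
# for a single ingested source `live`. All settings are placeholders.
streams = [
  ("video_hi",   %ffmpeg(format="mpegts",
                         %audio(codec="aac", b="128k"),
                         %video(codec="libx264", b="1500k"))),
  ("video_mid",  %ffmpeg(format="mpegts",
                         %audio(codec="aac", b="96k"),
                         %video(codec="libx264", b="800k"))),
  ("video_lo",   %ffmpeg(format="mpegts",
                         %audio(codec="aac", b="64k"),
                         %video(codec="libx264", b="400k"))),
  ("audio_only", %ffmpeg(format="mpegts",
                         %audio(codec="aac", b="128k")))
]
output.file.hls("/var/hls/source001", streams, live)
```

If all renditions end up in one master playlist, hls.js/video.js should be able to do the seamless quality switching on the player side.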
Average number of viewers is 25 per stream. The fanout will be done via reverse HTTP(S) proxies (essentially a homegrown CDN) and seems to be out of scope for liquidsoap.
Live streams usually last between 1.5 and 2.5 hours, and in a high-load case usually all 800 of them happen roughly at the same time, although they won't exactly start/end in the same second. In a low-load case there will be maybe 10 simultaneous streams. There is significant idle time, and there may be entire days without any stream running. If possible, it would be nice to scale down the resource requirements during idle and low-load time (i.e. stop any possible cloud fanout).
It seems that such a deployment would be theoretically possible, but I'm unsure of resource limitations and the availability of parallelized processing. The FOSDEM slides suggest one liquidsoap per stream, but for 800 streams this may require a lot of RAM, and it is unclear how multiple liquidsoap instances would be able to listen on the same UDP port for SRT.
In total, the video input bandwidth is 1.2 Gbit/s (800 × 1.5 Mbit/s). Obviously not all the transcoding can happen on a single server, so some sources will essentially have to be passed through to a separate transcoding cluster while keeping timestamps intact and streams in sync.
Describe the solution you'd like
Something similar to the problem statement.
Describe alternatives you've considered
Using RTMP instead of SRT for all sources and using nginx-rtmp to handle transcoding and repackaging to HLS. We have a working small-scale prototype with nginx-rtmp, but we'd rather ingest streams with SRT. Besides that, liquidsoap seems better suited for a larger-scale deployment where not all transcoding can happen on the same server.
AWS can do most of what we want, but at a price point we can't pay.
Additional context
#1377 mentions ffmpeg copy mode, which is essential for passing through the original streams without quality loss or additional CPU load from transcoding.
#1262 (native RTMP input) would be interesting for a small number of sources that may be incapable of speaking SRT, but we could just as well put a Raspberry Pi in front of the source to translate RTMP to SRT.
#1267 (HLS output) might be relevant, although the FOSDEM slides suggest this already works.
#1186 (SRT fallback recovery) might be relevant to high-priority sources where two SRT streams arrive in parallel and each of them may intermittently die.
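For the copy-mode point above, the Liquidsoap 2.x `%ffmpeg` encoder has copy variants that remux without re-encoding; a minimal, untested sketch (the path and the `live` source are hypothetical):

```liquidsoap
# Sketch only: pass the incoming H.264+AAC through untouched and
# only remux into HLS segments -- no transcoding CPU cost.
pass_through = %ffmpeg(format="mpegts", %audio.copy, %video.copy)
output.file.hls("/var/hls/source001", [("original", pass_through)], live)
```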
I'm doing this as a volunteer for a nonprofit, so the funds are scarce and the plan consists of renting servers at some hosting provider combined with on-demand cloud scaleout. Location for the hosting is EU (France or Germany) for GDPR reasons and because most viewers are in the EU.