I am trying to use the RTSPtoWeb application (https://github.com/deepch/RTSPtoWeb) to view my security camera’s RTSP stream on a website hosted on my home server using WebRTC. I would like to be able to hear the audio recorded by the security camera, in addition to viewing the video.
I started by adapting the code from https://github.com/deepch/RTSPtoWeb/blob/master/docs/examples/webrtc. I have now reached a point where I can successfully view my camera’s video, but the volume button on the video is greyed out, which implies that the audio track is not being received via WebRTC.
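As far as I can tell, this can be double-checked from the browser console: I assume something like the following should return an empty array when no audio track has arrived (camera_video is the id my script gives the video element):

document.querySelector('#camera_video').srcObject.getAudioTracks()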
Here are the contents of my index.html file:
<!doctype html>
<html>
  <head>
    <title>Camera Monitor</title>
  </head>
  <body>
    <script>
      // Create the video element
      const video = document.createElement('video')
      video.id = 'camera_video'
      video.style = 'max-width:100%; max-height:100%'
      video.muted = true
      video.controls = true
      // Add the video element to the page
      document.body.appendChild(video)

      // WebRTC script adapted from docs/examples/webrtc/main.js in the RTSPtoWeb GitHub repository
      // https://github.com/deepch/RTSPtoWeb/blob/master/docs/examples/webrtc/main.js
      document.addEventListener('DOMContentLoaded', () => {
        function startPlay (video, url) {
          // Create a new WebRTC peer connection, add a transceiver, and create a data channel
          const connection = new RTCPeerConnection({ sdpSemantics: 'unified-plan' })
          connection.addTransceiver('video', { direction: 'recvonly' })
          const channel = connection.createDataChannel('RTSPtoWeb')

          // When a track is received, attach its stream to the video element
          connection.ontrack = e => {
            console.log('Received ' + e.streams.length + ' track(s)')
            video.srcObject = e.streams[0]
            video.play()
          }

          // When session negotiation is to be started, send the offer
          // to RTSPtoWeb and apply its answer
          connection.onnegotiationneeded = async () => {
            const offer = await connection.createOffer()
            await connection.setLocalDescription(offer)
            fetch(url, {
              method: 'POST',
              body: new URLSearchParams({ data: btoa(connection.localDescription.sdp) })
            })
              .then(response => response.text())
              .then(data => {
                try {
                  connection.setRemoteDescription(
                    new RTCSessionDescription({ type: 'answer', sdp: atob(data) })
                  )
                } catch (e) {
                  console.warn(e)
                }
              })
          }

          // When the data channel is opened, log a message
          channel.onopen = () => {
            console.log(`${channel.label} data channel opened`)
          }

          // When the data channel is closed, log a message and call startPlay() again to reconnect
          channel.onclose = () => {
            console.log(`${channel.label} data channel closed`)
            startPlay(video, url)
          }

          // When the data channel receives a message, log it
          channel.onmessage = e => console.log(e.data)
        }

        const video = document.querySelector('#camera_video')
        const url = 'http://<ip_address>:8083/stream/camera/channel/0/webrtc'
        startPlay(video, url)
      })
    </script>
  </body>
</html>
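One thing I have not fully checked yet is what the server's SDP answer actually contains. I assume that logging the decoded answer before setRemoteDescription() would show whether an m=audio media section is present. A hypothetical helper I could call from the .then(data => { ... }) handler:

// Hypothetical debug helper: list the media sections in the base64-encoded answer SDP
// (every negotiated media section starts with an "m=" line, e.g. "m=video" or "m=audio")
function logAnswerMediaSections (base64Sdp) {
  const sdp = atob(base64Sdp)
  const mediaLines = sdp.split('\n').filter(line => line.startsWith('m='))
  console.log('media sections in answer:', mediaLines)
}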
One of the things I tried while troubleshooting was to change 'video' to 'audio' in the line connection.addTransceiver('video', { direction: 'recvonly' }). With that change the video is disabled, but I can successfully hear the audio recorded by the security camera, so I am confident that there are no issues with the camera's mic or the audio encoding it is using.
When I add the two lines below (one for video and the other for audio) to my code, only the audio plays. If I reverse the order of the two lines, only the video plays.
connection.addTransceiver('video', { direction: 'recvonly' })
connection.addTransceiver('audio', { direction: 'recvonly' })
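Since ontrack seems to fire once per incoming track, I also wondered whether the audio and video tracks might be arriving on separate streams, so that my second assignment to video.srcObject replaces the first. A minimal sketch of what I was thinking of trying (untested, and I may be misunderstanding the API) is to collect every incoming track into a single MediaStream:

// Collect all incoming tracks into one stream instead of
// overwriting video.srcObject on every ontrack event
const stream = new MediaStream()
connection.ontrack = e => {
  console.log('Received ' + e.track.kind + ' track')
  stream.addTrack(e.track)
  video.srcObject = stream
  video.play()
}

Is something along these lines the right direction, or is the problem elsewhere (for example, in how RTSPtoWeb expects the offer to be constructed)?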
There appears to be another related question on this forum, but it remains unanswered: "Webrtc is not receiving video and audio at the same time".
Any help would be much appreciated. Sorry if this sounds like a basic question, but I'm very new to WebRTC and have only been reading up online over the last few days trying to find a solution to this problem.