
Installing Jitsi behind a reverse proxy



Update (April 2020): Since the first publication of this article, the “Jitsi configuration” section has been updated to reflect changes upstream. A “More about STUN servers” section has been added as well.

Videoconferencing with the official instance has always been a pleasure, but it seemed a good idea to research how to install a Jitsi instance locally, that could be used by customers, by members of the local Linux Users Group (COAGUL), or by anyone else.

This instance is available at and should be considered a beta: it’s just been installed, and it’s still running the stock configuration. Feel free to tell us what works for you and what doesn’t!

Networking vs. virtualization host

One host was already set up as a virtualization environment, featuring libvirt, managing LXC containers and QEMU/KVM virtual machines. In this article, we focus on IPv4 networking. Basically, the TCP/80 and TCP/443 ports are exposed on the public IP, and NAT’d to one particular container, which acts as a reverse proxy. The running Apache server defines as many VirtualHosts as there are services, and acts as a reverse proxy for the appropriate LXC container or QEMU/KVM virtual machine.

Schematically, here’s what happens:
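Roughly, this ASCII sketch shows the path an incoming request takes (host names and component layout are illustrative, matching the description above):

```text
client
  │  HTTPS request for service.example.com
  ▼
public IP on the host ── DNAT TCP/80, TCP/443 ──► reverse-proxy LXC container
                                                       │  Apache VirtualHost
                                                       ▼
                                     backend LXC container or QEMU/KVM VM
```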

What does that mean for the Jitsi installation? Well, Jitsi expects those ports to be available:

 - TCP/80 (HTTP, notably for Let’s Encrypt challenges);
 - TCP/443 (HTTPS, web interface);
 - TCP/4443 (RTP media, fallback over TCP);
 - UDP/10000 (RTP media).

For this specific host, TCP/4443 and UDP/10000 were available, and have been NAT’d as well to the Jitsi virtual machine directly. Given the existing services, the same couldn’t be done for the TCP/443 port, which explains the need for the following section.

[Figure: NAT and reverse proxy for Jitsi]

Note: A summary of the host’s iptables configuration is available in the annex at the bottom of this article.

Apache as a reverse proxy

A new VirtualHost was defined on the apache2 service running as reverse proxy. The important parts are quoted below:

<VirtualHost *:80>
    RedirectMatch permanent ^(?!/\.well-known/acme-challenge/).*

<VirtualHost *:443>
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off

    ProxyPass        /
    ProxyPassReverse /

The redirections set up on the TCP/80 port (everything except ACME challenges is sent to HTTPS) were already covered, so let’s concentrate on the TCP/443 part.

The ProxyPass and ProxyPassReverse directives act on /, meaning every path will be proxied to the Jitsi virtual machine. If one wasn’t using VirtualHost directives to distinguish between services, one could dedicate specific paths (“subdirectories”) to Jitsi, and proxy only those to the Jitsi instance. But let’s concentrate on the simpler “the whole VirtualHost is proxied” case.
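For illustration, a path-based variant might look like the following sketch; the /jitsi/ path and the backend address are placeholders, and note that Jitsi itself would also need to be configured to be served under a subpath:

```apache
# Hypothetical path-based variant: only requests under /jitsi/ are forwarded.
ProxyPass        /jitsi/ https://192.0.2.10/
ProxyPassReverse /jitsi/ https://192.0.2.10/
```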

The first directive, SSLProxyEngine on, is needed for apache2 to accept proxying requests to a server speaking HTTPS, instead of plain HTTP.

All other SSLProxy* directives aren’t too nice, as they disable all checks! Why do that, then? The answer is that Jitsi’s default installation sets up an NGINX server with HTTP-to-HTTPS redirections, and it seemed easier to forward requests directly to the HTTPS port, disabling all checks since that NGINX server was installed with a self-signed certificate. One could instead deploy a suitable certificate there and re-enable the checks, rather than use this “StackOverflow-style heavy hammer” (some directives might not even be needed).
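Put together, a complete VirtualHost could look like the following sketch. The jitsi.example.com host name, the 192.0.2.10 backend address, and the certificate paths are placeholders, not the actual values used on this instance:

```apache
<VirtualHost *:80>
    ServerName jitsi.example.com
    # Let ACME challenges through, redirect everything else to HTTPS:
    RedirectMatch permanent ^(?!/\.well-known/acme-challenge/).* https://jitsi.example.com$0
</VirtualHost>

<VirtualHost *:443>
    ServerName jitsi.example.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/jitsi.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/jitsi.example.com/privkey.pem

    # Proxy to the Jitsi VM's NGINX over HTTPS, skipping certificate checks
    # since that server uses a self-signed certificate:
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off

    ProxyPass        / https://192.0.2.10/
    ProxyPassReverse / https://192.0.2.10/
</VirtualHost>
```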

Jitsi configuration

Jitsi itself was installed on a QEMU/KVM virtual machine, running a basic Debian 10 (buster) system, initially provisioned with 2 CPUs, 4 GB RAM, 3 GB virtual disk. Its IP address is, which is what was configured as the target of the ProxyPass* directives in the previous section.

The installation was done using the documentation, entering as the FQDN, and opting for a self-signed certificate (leaving the reverse proxy in charge of the Let’s Encrypt certificate dance, like it does for all VirtualHosts).

Update (April 2020): Since late March 2020, upstream switched from videobridge to videobridge2. Another important change is that the jitsi-meet-turnserver package is pulled through jitsi-meet’s Recommends, as can be seen in APT metadata (wrapped for readability):

 jitsi-videobridge2 (= 2.1-157-g389b69ff-1),
 jicofo (= 1.0-539-1),
 jitsi-meet-web (= 1.0.3928-1),
 jitsi-meet-web-config (= 1.0.3928-1),
 jitsi-meet-prosody (= 1.0.3928-1),
 jitsi-meet-turnserver (= 1.0.3928-1) | apache2

TURN servers make it possible for clients to exchange streams in a peer-to-peer fashion when there are only two of them, by finding a way to traverse NATs. In the setup documented here, the easiest option is to not install the jitsi-meet-turnserver package (as documented recently in

Now, a very important point needs to be addressed (no pun intended). It isn’t so much related to running behind a reverse proxy as to the fact that the TCP/4443 and UDP/10000 ports are NAT’d: the videobridge component needs to know about that, i.e. about both the public IP and the local IP. In this context, the local IP is the Jitsi virtual machine’s local IP (where the NAT for TCP/4443 and UDP/10000 points to), not the reverse proxy’s local IP. That’s why those lines have to be added to the /etc/jitsi/videobridge/ configuration file:
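The lines in question follow the NAT-harvesting properties documented by upstream for ice4j/videobridge; here is a sketch with placeholder addresses (the usual file name, sip-communicator.properties, is assumed):

```properties
# Tell videobridge it sits behind NAT: the VM's local address, and the
# public address that the TCP/4443 and UDP/10000 NAT rules point at.
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=192.0.2.10
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.1
```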

[ Hint: Beware, there’s another configuration file, for the jicofo component! ]

Additionally, a default setting needs to be commented out (in the same file), because the TURN server isn’t installed:

Remember to restart the service:

systemctl restart jitsi-videobridge2

Update (April 2020): Until late March 2020, this systemd service unit used to be called jitsi-videobridge instead.

More about STUN servers

A privacy-conscious user was kind enough to inform a number of Jitsi instance administrators (including us) that the default Jitsi configuration uses Google’s STUN servers. This was fixed through a recent pull request: config: use Jitsi's STUN servers by default, instead of Google's.

Without waiting for a new upstream release, administrators can tweak their local configuration (in /etc/jitsi/meet/F.Q.D.N-config.js). This can be checked client-side by running tcpdump and checking packets are seen when a 2-participant conversation is set up:

tcpdump host
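Concretely, the tweak might look like the following in F.Q.D.N-config.js; the turnrelay host name below is the one introduced by the upstream pull request, but double-check it against the config.js shipped with your installed version:

```js
// Use Jitsi's STUN server instead of Google's defaults:
p2p: {
    // ... other p2p settings left unchanged ...
    stunServers: [
        { urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' }
    ]
},
```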

For completeness: Jitsi’s own infrastructure relies on Amazon Web Services at the moment.

Annex: host networking configuration

The relevant iptables rules on the host are the following (leaving aside the usual MASQUERADE rule, which is required when using NAT):

Chain FORWARD (filter table)
target     prot opt source               destination
ACCEPT     tcp  --        tcp dpt:80
ACCEPT     tcp  --        tcp dpt:443
ACCEPT     tcp  --        tcp dpt:4443
ACCEPT     udp  --        udp dpt:10000

Chain PREROUTING (nat table)
target     prot opt source               destination
DNAT       tcp  --          tcp dpt:80 to:
DNAT       tcp  --          tcp dpt:443 to:
DNAT       tcp  --          tcp dpt:4443 to:
DNAT       udp  --          udp dpt:10000 to:

Published: Wed, 18 Mar 2020 10:15:00 +0100
Last modified: Thu, 02 Apr 2020 03:30:00 +0200
