Website Performance

Hi guys, I really like what you are doing and have built a monitoring station. However, the thing I find so frustrating about using the SatNOGS website is that its performance is horrendous.

Just loading the landing page took a total of 18 seconds to complete, and some simple digital assets like JPGs/PNGs take over 5 seconds to deliver the first byte. And this is just the bog-standard landing page.
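For anyone who wants to reproduce these numbers, here is roughly how to take the measurement with curl's built-in timing variables (the SatNOGS URL in the comment is just a stand-in for whichever page or asset you test):

```shell
# Time a single request: TCP connect, time to first byte (TTFB), and total.
# curl reports each phase via its -w timing variables (all in seconds).
measure() {
  curl -s -o /dev/null \
    -w 'connect: %{time_connect}s  TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
    "$1"
}

# Example (stand-in URL; test whichever page or asset you like):
#   measure "https://network.satnogs.org/"
```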

I understand that database queries can take longer to serve, but even simple queries of the database seem to take a long time.

I would be happy to help improve this, but maybe there are already people looking at it.

I just cannot imagine setting up a ground station and having to spend an inordinate amount of time waiting for website downloads to verify passes.



Hello @cureton !

I just ran this locally and I am seeing a tremendous difference from what you got:

Can you post a traceroute to ?

Hi Pierros,

Thanks for testing locally and for your comments.

I ran some further tests on a few website performance sites and got similar results to mine, indicating that it is unlikely to be my internet connection.

Please see the link below for an example of one test out of Sydney, where page loads take 15 seconds.

I note that if I run the same test from San Francisco, page load times drop to 6 seconds.

Below are the traceroute results. Given that I am in Melbourne, the 350 ms pings are typical to Europe.

Many thanks,

traceroute to (, 30 hops max, 60 byte packets
1 * * *
2 ( 14.347 ms 14.244 ms 14.142 ms
3 ( 28.568 ms 28.466 ms 28.365 ms
4 * * *
5 * * *
6 * * *
7 ( 251.125 ms ( 250.090 ms 249.676 ms
8 ( 149.648 ms ( 145.553 ms 145.430 ms
9 ( 239.672 ms ( 246.981 ms ( 244.398 ms
10 ( 187.172 ms 187.087 ms ( 170.689 ms
11 ( 313.422 ms ( 313.498 ms ( 316.327 ms
12 ( 193.390 ms 193.783 ms 200.711 ms
13 ( 184.254 ms 192.898 ms 185.019 ms
14 ( 302.513 ms 303.309 ms 304.873 ms
15 * * *
16 * * *
17 * * *
18 ( 307.040 ms 306.962 ms 309.565 ms
19 ( 327.075 ms 323.369 ms 323.316 ms
20 * * *
21 * * *
22 * * *
23 * * *
24 ( 349.260 ms 349.559 ms 353.313 ms
25 ( 360.878 ms 360.370 ms 360.118 ms
26 ( 371.722 ms ( 349.442 ms 348.672 ms
27 ( 348.420 ms 347.768 ms 345.988 ms
28 ( 361.093 ms !X 361.712 ms !X 360.140 ms !X

cureton@uranium:~$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=42 time=359 ms
64 bytes from ( icmp_seq=2 ttl=42 time=359 ms
64 bytes from ( icmp_seq=3 ttl=42 time=367 ms
64 bytes from ( icmp_seq=4 ttl=42 time=362 ms
64 bytes from ( icmp_seq=5 ttl=42 time=359 ms
64 bytes from ( icmp_seq=6 ttl=42 time=359 ms
64 bytes from ( icmp_seq=7 ttl=42 time=359 ms
64 bytes from ( icmp_seq=8 ttl=42 time=363 ms
64 bytes from ( icmp_seq=9 ttl=42 time=361 ms
64 bytes from ( icmp_seq=10 ttl=42 time=365 ms
64 bytes from ( icmp_seq=11 ttl=42 time=359 ms
64 bytes from ( icmp_seq=12 ttl=42 time=362 ms
64 bytes from ( icmp_seq=13 ttl=42 time=359 ms
64 bytes from ( icmp_seq=14 ttl=42 time=360 ms
64 bytes from ( icmp_seq=15 ttl=42 time=362 ms
64 bytes from ( icmp_seq=16 ttl=42 time=359 ms
64 bytes from ( icmp_seq=17 ttl=42 time=362 ms
64 bytes from ( icmp_seq=18 ttl=42 time=359 ms
64 bytes from ( icmp_seq=19 ttl=42 time=360 ms
^C
--- ping statistics ---

Further to the above testing, I conducted a speed test on the largest digital asset from the landing page using another test site that does geographic comparisons. It appears that bandwidth from the hosting service to the Asia-Pacific region is terrible.


Hi! Just to make sure, are you talking about, or

Just for a larger sample size, here is the network loading from Midlothian, IL.

The longest time is for the stations' location data for the map on the front page.

I also pinged and ran tracert on the site.

Hi Acinonyx,

Absolutely; yes, I have not tried and individually, as I assumed they would be running on equipment with similar network paths.

Basic tests this morning:
8 seconds to load
6 seconds to load
10 seconds to load

David, has a load time of 2.14 s for me. Below are the ping and tracert results for itself.

As you can see, they are on wildly different network paths.

db.satnogs takes 6.57 s to load for me, and its ping and tracert results are below as well.


I can definitely confirm this, being based in Singapore (1 Gb fibre FTTH). That was also one of the reasons I stopped vetting observations, as using the option to open 50 of them in individual tabs would often time out for half of them.

73’s Martin 9V1RM


Same problem here… I have 2 stations and I’m planning to close one of them… It’s too time-consuming…

73’s Juan Carlos EA5WA

Let me note that static assets for and are served by servers near your location, thanks to the generous support of Fastly CDN for our project! So a traceroute is not an accurate metric of the sites' performance.

I have no issues with vetting stuff. Every 3 days I vet all of the unvetted Fox-1A obs, and currently I am vetting all the Fox-1B obs. I can easily get through 10 pages of obs in less than an hour.

Opening 50 observations at once is definitely stressing the database. I have reduced the items per page to 20. I hope it helps for now. In the future we will eventually redesign the vetting UI to become more efficient. Here is a related GitLab issue about it.

Thanks. Yes, I can confirm that the Fastly CDN seems to be doing its job: I can download the waterfall images it serves at around 2.5 MByte/s once the cache is seeded. There is some delay in delivery the first time an image is downloaded, while the cache is being seeded; that is understandable, but it probably negates the benefit of the cache for that first user, although it does offload the traffic from the origin server. The benefit of the caching goes to the second user to download the image, which in reality is probably limited for these assets. They are not the latest TikTok video going viral :wink:
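For what it's worth, the cache state is visible in the response headers: with Fastly's default setup, an X-Cache header reads MISS on the seeding fetch and HIT afterwards, X-Served-By names the edge node, and a growing Age means the object is sitting in cache. A quick sketch to pull those out (the header names are Fastly's defaults, and the URL in the comment is just a stand-in for the asset being tested):

```shell
# Filter a HEAD response down to the Fastly cache headers.
# x-cache: MISS on the first (seeding) fetch, HIT once cached;
# age > 0 also indicates the object was served from cache.
fastly_headers() {
  tr -d '\r' | grep -i -E '^(x-cache|x-served-by|age):'
}

# Example (stand-in URL; point it at the asset you are testing):
#   curl -sI "https://network.satnogs.org/" | fastly_headers
```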

On the website there are some large image files that would benefit greatly from being on the Fastly CDN since, unlike the individual waterfalls, they are downloaded all the time by multiple people hitting the website. This would take a load off the origin server, for I currently get these files at about 80 kB/s from the origin server.

I note that the audio .ogg files are coming from (approx. 240 kByte/s) and are therefore not on the Fastly CDN. However, the benefit of the CDN again only applies to the second downloader, which would be even rarer than for waterfalls, so this is just an observation.

Many thanks for looking at this. There does not seem to be a single simple bottleneck in performance, and the unavoidable time-of-flight latency in my neck of the woods only contributes to the problem.

Many thanks, I look forward to contributing to this great project.