Hi guys, I really like what you are doing and have built a monitoring station. However, the thing I find so frustrating about using the SatNOGS website is that its performance is horrendous.
Just loading the landing page takes a total of 18 seconds, and some simple digital assets like JPGs/PNGs take over 5 seconds to transfer the first byte, and this is just the bog-standard landing page.
I understand that database queries can take longer to furnish, but even simple queries seem to take a long time.
I would be happy to help improve this, but maybe there are people already looking at it.
I just cannot imagine setting up a ground station and having to spend an inordinate amount of time waiting for website downloads to verify passes.
I ran some further tests on a few website performance sites and got similar results to mine, indicating that it is unlikely to be my internet connection.
Please see the link below for an example of one test out of Sydney, where page loads are 15 seconds.
I note that if I run the same test from San Francisco, page load times for satnogs.org drop to 6 seconds.
Below are the traceroute results. Given that I am in Melbourne, the 350 ms pings are typical to Europe.
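As a sanity check on that figure, here is a back-of-the-envelope lower bound on the round-trip time, assuming a rough great-circle distance of about 15,500 km from Melbourne to Athens and light propagating in fibre at roughly 200,000 km/s (both figures are approximate assumptions, not measurements):

```python
# Rough lower bound on RTT from Melbourne to the satnogs.org server
# in Athens, ignoring routing detours and queueing delay.
# Distance and fibre speed below are approximate assumptions.
distance_km = 15_500        # approx. great-circle Melbourne -> Athens
fibre_speed_km_s = 200_000  # light in fibre travels at roughly 2/3 of c

one_way_s = distance_km / fibre_speed_km_s
rtt_ms = 2 * one_way_s * 1000
print(f"theoretical minimum RTT: {rtt_ms:.0f} ms")  # -> about 155 ms
```

The observed ~360 ms is well above that floor, which is consistent with the traceroute below showing the route detouring via the US rather than taking a direct path.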
Many thanks,
David
traceroute to satnogs.org (83.212.169.121), 30 hops max, 60 byte packets
1 * * *
2 10.246.140.233 (10.246.140.233) 14.347 ms 14.244 ms 14.142 ms
3 59.154.112.249 (59.154.112.249) 28.568 ms 28.466 ms 28.365 ms
4 * * *
5 * * *
6 * * *
7 59.154.18.20 (59.154.18.20) 251.125 ms 59.154.18.24 (59.154.18.24) 250.090 ms 249.676 ms
8 59.154.18.6 (59.154.18.6) 149.648 ms 59.154.18.12 (59.154.18.12) 145.553 ms 145.430 ms
9 203.208.174.49 (203.208.174.49) 239.672 ms 203.208.177.129 (203.208.177.129) 246.981 ms 203.208.174.49 (203.208.174.49) 244.398 ms
10 ae28.mpr1.lax12.us.zip.zayo.com (64.125.35.197) 187.172 ms 187.087 ms 64.125.35.201 (64.125.35.201) 170.689 ms
11 ae13.cs2.lax112.us.eth.zayo.com (64.125.27.42) 313.422 ms ae14.cs2.lax112.us.eth.zayo.com (64.125.27.38) 313.498 ms ae13.cs2.lax112.us.eth.zayo.com (64.125.27.42) 316.327 ms
12 ae21.mpr1.slc2.us.zip.zayo.com (64.125.26.19) 193.390 ms 193.783 ms 200.711 ms
13 ae4.mpr2.slc2.us.zip.zayo.com (64.125.26.165) 184.254 ms 192.898 ms 185.019 ms
14 ae11.cs1.den5.us.zip.zayo.com (64.125.26.42) 302.513 ms 303.309 ms 304.873 ms
15 * * *
16 * * *
17 * * *
18 ae4.mpr1.lhr15.uk.zip.zayo.com (64.125.28.195) 307.040 ms 306.962 ms 309.565 ms
19 linx.ix.geant.net (195.66.226.161) 327.075 ms 323.369 ms 323.316 ms
20 * * *
21 * * *
22 * * *
23 * * *
24 ae3.mx2.ath.gr.geant.net (62.40.98.151) 349.260 ms 349.559 ms 353.313 ms
25 grnet-ias-grnet-gw.mx2.ath.gr.geant.net (83.97.88.70) 360.878 ms 360.370 ms 360.118 ms
26 ypedcfs2-eier-1.backbone.grnet.gr (62.217.100.105) 371.722 ms ypedcfs1-eier-1.backbone.grnet.gr (62.217.100.101) 349.442 ms 348.672 ms
27 gnt7-1115.yp3.grnet.gr (62.217.92.74) 348.420 ms 347.768 ms 345.988 ms
28 vm1.satnogs.org (83.212.169.121) 361.093 ms !X 361.712 ms !X 360.140 ms !X
cureton@uranium:~$ ping satnogs.org
PING satnogs.org (83.212.169.121) 56(84) bytes of data.
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=1 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=2 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=3 ttl=42 time=367 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=4 ttl=42 time=362 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=5 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=6 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=7 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=8 ttl=42 time=363 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=9 ttl=42 time=361 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=10 ttl=42 time=365 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=11 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=12 ttl=42 time=362 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=13 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=14 ttl=42 time=360 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=15 ttl=42 time=362 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=16 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=17 ttl=42 time=362 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=18 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=19 ttl=42 time=360 ms
^C
--- satnogs.org ping statistics ---
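The summary line got cut off above, but min/avg/max can be recovered from the raw output with a few lines of Python (a sketch; the sample below uses three of the lines from the output above, and the full list works the same way):

```python
import re

# A few of the raw ping lines pasted from above.
ping_output = """\
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=1 ttl=42 time=359 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=3 ttl=42 time=367 ms
64 bytes from vm1.satnogs.org (83.212.169.121): icmp_seq=4 ttl=42 time=362 ms
"""

# Pull out every "time=... ms" value as a float.
rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", ping_output)]
print(f"min/avg/max = {min(rtts):.0f}/{sum(rtts)/len(rtts):.0f}/{max(rtts):.0f} ms")
```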
Further to the above testing, I conducted a speed test on the largest digital asset from the satnogs.org landing page using another test site that does geographic comparisons. It appears that bandwidth from the hosting service to the Asia-Pacific region is terrible.
I can definitely confirm that, being based in Singapore (1 Gb fibre FTTH). That was also one of the reasons I stopped vetting observations: using the option to open 50 of them in individual tabs would often time out for half of them.
Let me note that static assets for network.satnogs.org and db.satnogs.org are served by servers near your location, thanks to the generous support of our project by Fastly CDN! So a traceroute is not an accurate metric of the site's performance.
I have no issues with vetting. Every 3 days I vet all of the unvetted Fox-1A observations, and currently I am vetting all the Fox-1B observations. I can easily get through 10 pages of observations in less than an hour.
Opening 50 observations at once definitely stresses the database. I have reduced the items per page to 20; I hope that helps for now. In the future we will redesign the vetting UI to be more efficient. Here is a related GitLab issue about it.
Thanks. Yes, I can confirm that the Fastly CDN seems to be doing its job: I can download the waterfall images it serves at around 2.5 MB/s once the cache is seeded. There is some delay the first time an image is downloaded, while the cache is being seeded. That is understandable, and it does offload traffic from the origin server, but it somewhat negates the benefit of the cache for that first user. The benefit of caching goes to the second user to download the image, which in reality is probably limited for these assets; they are not the latest TikTok video going viral.
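For anyone who wants to check whether a given asset came from the edge cache or had to go back to the origin, Fastly reports cache status in the X-Cache response header. A minimal check is sketched below; the sample headers are made up for illustration, not captured from satnogs.org:

```python
# Decide whether a response was served from Fastly's edge cache,
# based on its headers. These sample header values are illustrative
# assumptions, not real captures.
sample_headers = {
    "x-served-by": "cache-syd10121-SYD",
    "x-cache": "HIT",
    "x-cache-hits": "1",
}

def is_cache_hit(headers):
    """Return True if any hop in the X-Cache header reports a HIT."""
    return "HIT" in headers.get("x-cache", "").upper()

print(is_cache_hit(sample_headers))  # -> True
```

In practice you would feed this the headers of a real response (e.g. from `curl -sI <url>`); a MISS on the first request followed by a HIT on the second matches the cache-seeding behaviour described above.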
On the satnogs.org website there are some large image files that would benefit greatly from being on the Fastly CDN since, unlike the individual waterfalls, they are downloaded all the time by multiple people hitting the website. This would also take load off the origin server for satnogs.org. I currently get these files at about 80 kB/s from the origin server.
I note that the audio .ogg files come from us.archive.org (approx. 240 kB/s) and are therefore not served by the Fastly CDN. However, the benefit of the CDN again goes to the second comer, which would matter even less here than for the waterfalls, so this is just an observation.
Many thanks for looking at this. There does not seem to be a single simple bottleneck; the unavoidable time-of-flight latency in my neck of the woods only adds to the problem.
Many thanks, I look forward to contributing to this great project.