Without going into details of a proof of concept I’m toying around with, let’s think of automatic scheduling via network.satnogs.org as a small part of the entire SatNOGS ecosystem, and imagine the scenario where the following are true:
- overriding priorities and scheduling observations manually are always available, as @pierros already proposed
- decoded frames from network.satnogs.org are making it to db.satnogs.org
- frames from db.satnogs.org are decoded into a time-series database warehouse for consumption and visualization
- said time-series database should be assumed to hold irregular time series, because of all the factors involved in collecting this data (in other words, we can’t assume today that a frame of data from satellite X will arrive once an hour… and this bullet point is the problem I propose we solve, given the previous bullet points are already implemented)
Now, let me propose that our end goal be the regular collection of data, as frequently as possible given our capabilities, telemetry forwarder contributions, and network. Regular, and frequent. Consistency and regularity trump frequency: 1 data point every 4 hours is better than 1 data point at hour 1, then one at 8, then one at 9, then one at 20. We would take what is today an irregular time series and try to make it regular through automation.
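To make “regularity trumps frequency” concrete, here’s a rough sketch of one possible metric (my own illustration, not anything implemented in SatNOGS): the coefficient of variation of the gaps between frames. Lower is more regular, regardless of how many frames arrived.

```python
from statistics import mean, pstdev

def regularity_score(timestamps_hours):
    """Coefficient of variation of inter-arrival gaps (lower = more regular).

    `timestamps_hours` is a sorted list of frame arrival times in hours.
    """
    gaps = [b - a for a, b in zip(timestamps_hours, timestamps_hours[1:])]
    return pstdev(gaps) / mean(gaps)

# One frame every 4 hours:
print(regularity_score([0, 4, 8, 12, 16, 20]))  # 0.0 — perfectly regular

# Frames at hours 1, 8, 9, 20 — same day, but bursty and gappy:
print(regularity_score([1, 8, 9, 20]))  # well above zero
```

The exact metric matters less than the idea: the automation would optimize something like this score per transmitter, computed from what actually lands in the warehouse.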
In my scenario above we would have the data we need to calculate which transmitters to collect from next - but it would be calculated based on the results that end up in the data warehouse, trying to create a consistent stream of data (based on our capabilities and the satellites up there). This way, the results are what matter.
And, since we focus on the results at the end, our automation would account for the data we are collecting through means outside of network.satnogs.org (we have over 18.6 million frames in db.satnogs.org today and over 193,000 observations in network.satnogs.org). Today, we have no control over which satellites the users contributing to db.satnogs.org with telemetry forwarders focus on; we may be able to influence them socially, but let’s assume they are a fixed value. There are roughly 150 contributors providing decoded data for some 100 satellites. Some of those satellites we would never touch with the network automation, because we already get enough data elsewhere (let’s say STRAND-1). Or maybe we have seen good data for STRAND-1 consistently every 4 hours and suddenly it has been 5 hours without data; if the network can fit an observation in to fill that gap, then it should, and this would be an automated process.
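The gap-filling trigger above could be as simple as this sketch (function name, cadence values, and the slack factor are all hypothetical illustrations, not anything that exists today):

```python
def needs_network_fill(last_frame_age_hours, typical_cadence_hours, slack=1.2):
    """Flag a satellite for network scheduling when its usual cadence is broken.

    `typical_cadence_hours` would come from the warehouse (e.g. the median gap
    between decoded frames from all sources); `slack` is an arbitrary tolerance
    so we don't react to tiny jitter in arrival times.
    """
    return last_frame_age_hours > typical_cadence_hours * slack

# STRAND-1 usually seen every 4 hours, but it has now been 5 hours:
print(needs_network_fill(5, 4))  # True — try to fit an observation in
print(needs_network_fill(3, 4))  # False — other sources are covering it
```

The point is that the decision is driven by observed results in the warehouse, not by a static priority assigned up front.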
(take “4 hours” and “5 hours” as a very rough example - I haven’t put any real calculations into this and it very well may be that our expectations would be “once a day” per satellite… and this also depends on the network growth meeting or exceeding the growth of transmitters out there)
Anyway - I still think the “prioritized list” method is a good approach to take, and maybe this bigger-picture method becomes a v3. I just want us to take a step back, look at the end result (the data), and work backward from there to see what works.
In this scenario, a “prioritized list” becomes less of a list for individual ground stations to pick from, and more of a set of per-satellite cadences: “1 frame every 24 hours” for one satellite vs. “1 frame every 4 hours” for another… Now, this puts the onus of scheduling back on the network, but if we are to think at the scale of 5-10 years from now, the results would be more consistent.
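Picking the next target under per-satellite cadences could then boil down to “who is most overdue relative to their cadence”. A minimal sketch, assuming hypothetical satellite names and target cadences:

```python
def most_overdue(satellites, now_hours):
    """Pick the satellite whose target cadence is most overdue.

    `satellites` maps name -> (target_cadence_hours, last_frame_at_hours).
    An overdue ratio above 1.0 means the target cadence has been missed.
    """
    def overdue_ratio(item):
        _name, (cadence, last_seen) = item
        return (now_hours - last_seen) / cadence
    name, _ = max(satellites.items(), key=overdue_ratio)
    return name

targets = {
    # hypothetical names and cadences for illustration only
    "SAT-A": (24, 10),  # 1 frame / 24 h, last frame at hour 10
    "SAT-B": (4, 18),   # 1 frame / 4 h, last frame at hour 18
}
print(most_overdue(targets, now_hours=21))  # SAT-B (ratio 0.75 vs 0.46)
```

A real scheduler would of course also have to check which stations actually have an upcoming pass for the overdue satellite, but that lookup is what network.satnogs.org already knows how to do.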