Bulk scheduling


I notice on my stations some bulk scheduling of many observations. Surely this must be done by some automated process. Is this documented somewhere, how to do it?

Also, what is the point of scheduling this many observations if you don’t have the time or inclination to vet these observations?

Is this done just to fill the database?

Is this done just to score brownie points for the most observations/data frames at someone else’s expense? Surely some credit should go to the station builder, not just to the observer?

Can I place a limit on these bulk observations to keep my ISP charges down, or is the only other option to go into standalone mode or testing mode?

Not bitching, just curious. (“bitching” is Aussie slang for complaining)

Bob vk2byf


Unless you use the auto-scheduler, there isn’t any known automatic way to schedule. If you see a lot of observations from Dimitris, it is because he is responsible for operations in SatNOGS. This means that in his spare time he tries to schedule observations and vet them, keeping in mind to share the time evenly across all satellites and, if there is a need (a special event, a new deployment, a satellite stopping working unexpectedly, etc.), to give some priority. If you are interested in auto-scheduling, check this thread and this one; it is still a work in progress but getting closer.

The point of scheduling observations is to be able to observe all satellites as much as possible, ideally 24 hours a day. With this goal in mind, we need to use the resources of the network (observations/stations/people) in the best way to achieve the best results. The concept is to automate the whole process, but a lot of work is still needed to get there.
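The idea of sharing station time evenly across satellites, while letting priority satellites (special event, new deployment, recent anomaly) jump the queue, can be sketched like this. This is an illustrative toy under my own assumptions, not the actual SatNOGS scheduling algorithm; all names here are hypothetical:

```python
def fair_schedule(passes, priority=None):
    """Greedy toy scheduler: repeatedly pick the upcoming pass whose
    satellite has received the least observation time so far, letting
    satellites flagged as priority jump the queue.

    passes   -- list of (satellite, duration_seconds) tuples, assumed
                non-overlapping for simplicity
    priority -- optional set of satellite names to favour
    """
    priority = priority or set()
    scheduled_time = {sat: 0 for sat, _ in passes}
    remaining = list(passes)
    plan = []
    while remaining:
        # Priority satellites sort first (False < True), then the
        # least-served satellite among the rest.
        remaining.sort(key=lambda p: (p[0] not in priority,
                                      scheduled_time[p[0]]))
        sat, duration = remaining.pop(0)
        plan.append((sat, duration))
        scheduled_time[sat] += duration
    return plan
```

For example, `fair_schedule([("A", 600), ("A", 600), ("B", 300)], priority={"B"})` puts the pass of satellite B first, then alternates by least scheduled time.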

My quick way to check how much the network has grown is the ISS SSTV events; it’s not accurate, as it only shows VHF stations, but it’s enough for an impression. I would love to see similar analysis for other satellites/stations. Three years ago, the coverage of an ISS SSTV event was about 5-10%, with the network receiving only a couple of the total images that were transmitted. Now we reach coverage of around 50-60%, with many stations acting as backup, mostly in Europe, North America and Australia/New Zealand.

About vetting, in this thread you can find some thoughts on the matter. What has changed since then is that the new permissions have been implemented, so any station owner is able to vet observations.

Unfortunately we haven’t made much progress on automating it; however, there are open discussions on improving it and reaching a point where vetting can easily be done automatically.

In general vetting goes well; the unvetted observations only go back to the start of June. To me that’s not bad, and as I said in the linked thread above, there is nothing wrong with having unvetted observations.

My suggestion for everyone: read the vetting guide and vet your observations; if you have more time, vet the observations of your own station, and if you have even more free time, vet observations on other stations too. :slight_smile: The last one is easier if you choose one satellite and vet all of its observations, as you know what to expect in the waterfalls.

Observers don’t take any credit for the observations, such as how much decoded data was captured in their observations. And in my opinion they shouldn’t, as they will soon be replaced by auto-scheduling (not entirely, but that’s another discussion :slight_smile:).

On the other hand, for those satellites/transmitters whose data we upload to DB, credit goes to the station. Like the metrics for stations, the DB leaderboards need some revamping and need to become more accurate, based mostly on quality and not quantity, for example by showing how much of the uploaded data was decoded by decoders, along with other ideas that are already discussed in threads like this one.

If you need a hard limit, unfortunately the answer is no. For a soft limit, you can add a comment about it in your station description, and if you see that it is not respected, try to contact the people scheduling on your station. Dimitris (@BOCTOK-1) should be the most active, so let him know how many observations per day is a good number for you. In the future, with auto-scheduling and the utilization factor, owners will be able to set how much they want their station to be utilized by the auto-scheduling algorithm.
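A utilization-factor cap like the one described could work roughly like this. This is a hypothetical sketch: the function name, parameters, and the 24-hour window are my assumptions, not the actual SatNOGS implementation:

```python
def can_schedule(scheduled_seconds, pass_duration, utilization_factor,
                 window=24 * 3600):
    """Return True if adding a pass keeps the station under the
    owner-defined utilization cap (fraction of the window the station
    may spend observing). All parameter names are illustrative.

    scheduled_seconds  -- seconds already scheduled in the window
    pass_duration      -- length of the candidate pass, in seconds
    utilization_factor -- owner's cap, 0.0 (never) to 1.0 (always)
    """
    return (scheduled_seconds + pass_duration) <= utilization_factor * window
```

For an owner allowing 10% utilization (about 2.4 hours per day), a 600-second pass on top of 8000 scheduled seconds would still fit (8600 s ≤ 8640 s), while the same pass on top of 8500 seconds would be rejected.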

I would like at this point to take the opportunity from this question to add some general comments (and make my post bigger and more boring :stuck_out_tongue:). This project is based on the community and its contributions; people spend money and time to help the project, and this is much appreciated and makes SatNOGS a successful project. There isn’t any intention of putting anyone in a difficult situation; however, as this community is worldwide, I don’t think it’s possible for everyone to know everything about the conditions of setting up a station in another area.

For example, I have no idea about the weather, internet connection, or electric power conditions, or how easy or difficult access to hardware for building a station is, in many areas. So don’t hesitate to communicate any issues you have, and I’m sure the community will find a way to tackle them and benefit from it as a whole.

Sorry for the long post, but as the community gets bigger and bigger, I think we don’t communicate enough about what we are trying to do, how, and under which circumstances. So I take the opportunity in these kinds of posts to make people aware of the general idea, and that the project is not perfect but is getting there with the help of the community!


Thanks fredy for clearing that up for me.
I will now start vetting some of the observations that are of interest to me, ones I would have scheduled myself if someone hadn’t beaten me to it.

Internet in Australia starts at about $60 AUD per month. I have not yet exceeded my monthly GB quota, so it’s not a problem. I’m happy to make my contribution for the greater good. I think this is a fantastic project and the support is very good too. I’m a hardware man, not a programmer.
73 Bob vk2byf


Keep an eye on it and let us know if there is a need to reduce scheduling, and what would be a good scheduling rate. :wink: