What am I supposed to do with all the unvetted observations?

Perhaps I’m a bit naive, but I have put my ground station (70) online in the hope that it would be useful. I can see that the auto scheduler happily sinks its teeth into it and schedules a lot of passes.

Several of the passes are vetted Good (by whom?) but many are left as Unknown. Who will vet them? Am I supposed to do it? Frankly, I don’t have the time to sit at the terminal and vet heaps of observations.

In the end I may have to take my station offline again.


Hi @oz6bl,

First of all, I have to say that every station is useful, and I can give many reasons why if needed. :slight_smile:

Currently, an observation gets Good status in one of two ways: either it uses a mode that produces Data Artifacts, or someone has marked the waterfall as Has Signal. The first happens automatically, while the second is done manually by any station user, whom you can identify by hovering over the Has Signal tag above the waterfall in each observation.
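In pseudocode terms, the rule described above looks roughly like this (a sketch for illustration only; the function and argument names are made up and this is not the actual SatNOGS Network code):

```python
# Hypothetical sketch of the current "Good" vetting rule; names are
# illustrative, not taken from the SatNOGS Network codebase.

def observation_status(mode_produces_data: bool,
                       has_data_artifacts: bool,
                       marked_has_signal: bool) -> str:
    """Return the vetting status of an observation."""
    # Automatic path: the mode produces decodable Data Artifacts
    if mode_produces_data and has_data_artifacts:
        return "Good"
    # Manual path: a station user flagged the waterfall as "Has Signal"
    if marked_has_signal:
        return "Good"
    # Otherwise the observation stays unvetted
    return "Unknown"

print(observation_status(True, True, False))   # -> Good
print(observation_status(False, False, True))  # -> Good
print(observation_status(False, False, False)) # -> Unknown
```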

The vetting process is useful for two main reasons. The first, and most significant, is to know the status of a satellite/transmitter (whether it is alive or not); the second is to confirm (not always accurately, for several reasons) that a station is in good shape.

However, this doesn’t mean that all observations need to be vetted. Currently, the “responsibility” to vet lies with whoever scheduled the observations, but contributors and station owners who help with operations try to vet as many of them as possible. They also sometimes prioritize satellites that have just been deployed, or that have stopped or started transmitting after a long period, as their status is more critical.

Again, it is not necessary to vet all of them, but it would be nice to get there. We are trying to get there by introducing a new way to vet that will let us automate the process as much as possible and make it more useful than it is right now for third parties, like satellite teams and scientists. The planned opening of vetting to more people than just station owners will also help: it will allow people who don’t have stations to contribute to this project, much like people do in citizen science projects.

This new way is currently at an early proposal stage, under discussion between the developers. The next stage is to open the proposal to the community for feedback (by the way, if anyone wants access to review the current early draft of the proposal, let me know). Unfortunately, due to some other priorities, the proposal will not be discussed actively before 27 Jan.

I’m not sure why you feel you have to take your station offline. Is it that it gets too many observations from the auto-scheduler, or that they stay unvetted? If the former, we can reduce the rate of auto-scheduling on your station; if the latter, I suggest not worrying too much about the unvetted observations.


Thank you very much for your long and thoughtful answer. It has put my mind at rest. My worry was actually about leaving all those observations unvetted, but if they are almost as useful as vetted ones, then I’ll just leave them as they are.

I’m glad to see my station being used and I have even increased the target utilization to 80%.



Thanks Fredy,

Is processing and vetting a natural, textbook application of ML (Machine Learning)? We have humans to help train it.

I’m not convinced ML is a good candidate for demoding, but for vetting a waterfall image it seems perfect.

As I have time I will contribute. Thanks


Machine learning for waterfalls is indeed something that would be useful. As we move to the new waterfall artifact, which stores data instead of an image, I’m not sure it’s worth spending time on machine learning for the old image format. One of the reasons we are moving to the new format is to make training an ML application easier and more reliable.


In my opinion, ML could be useful for the existing contents of the database. There must be zillions of unvetted observations in the DB, and I can’t be the only one who has them. Having them vetted would be a Good Thing. Wouldn’t that make a good semester project for an ML student?


Sure, and there have been some attempts in the past that worked pretty well for specific satellites. Anyone who starts a project like that will need to take four things into consideration:

  1. There are several false positives, vetted wrong either automatically or manually for various reasons: noise frames, inexperienced users, imprecise rules on how to vet, etc. So the training set should be checked and verified manually before it is used.

  2. In several cases, due to imprecise or wrong TLEs (new deployments, satellites close to re-entry, etc.), the waterfall contains a satellite signal that doesn’t follow the expected straight line; these cases should probably not be used in the training set.

  3. The waterfall image size is constant, which means that observations with different durations, for example 4 min and 13 min, have the same waterfall size; if a satellite signal is present, it will look different in each, and this may confuse the ML system. Feeding the duration and the mode of the satellite into the ML system could help, but that would require a bigger training set, and so more work for point 1.

  4. Waterfalls have different max/min power levels, which means that the same color does not always correspond to the same power level. This may also affect the ML training.

The points above are the ones we are trying to fix or improve with the new vetting process, in combination with the new waterfall data format.
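To make points 3 and 4 concrete, a preprocessing step for such a student project might normalize each waterfall before training. Here is a minimal sketch, assuming the waterfalls have been loaded as 2D power arrays (time × frequency); the function, parameter names, and default values are all made up for illustration and are not part of any existing SatNOGS tooling:

```python
import numpy as np

def normalize_waterfall(power: np.ndarray,
                        duration_s: float,
                        rows_per_second: float = 2.0,
                        target_rows: int = 512) -> np.ndarray:
    """Normalize one waterfall (time x frequency power array) for ML training.

    Point 4: rescale power to [0, 1] per observation, so different
    max/min power levels map to comparable values.
    Point 3: resample the time axis to a fixed time resolution, so a
    4-minute and a 13-minute pass are comparable before cropping/padding.
    """
    # Point 4: per-observation min/max power normalization
    p_min, p_max = power.min(), power.max()
    norm = (power - p_min) / (p_max - p_min + 1e-9)

    # Point 3: resample the time axis to a fixed rows-per-second rate
    wanted_rows = int(duration_s * rows_per_second)
    row_idx = np.linspace(0, norm.shape[0] - 1, wanted_rows).round().astype(int)
    uniform = norm[row_idx]

    # Crop or zero-pad to a fixed input size for the model
    if uniform.shape[0] >= target_rows:
        return uniform[:target_rows]
    pad = target_rows - uniform.shape[0]
    return np.pad(uniform, ((0, pad), (0, 0)))

# Example: a fake 4-minute waterfall with 300 time rows x 256 frequency bins
fake = np.random.default_rng(0).normal(size=(300, 256))
out = normalize_waterfall(fake, duration_s=240)
print(out.shape)  # -> (512, 256)
```

Point 1 (a manually verified training set) and point 2 (excluding observations with bad TLEs) still have to be handled when assembling the dataset, before any preprocessing like this runs.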
