Looks like we are 'putting the cart before the horse'. Would it be better to add the auto-vetting system first and then the auto-scheduling?
Considering that with a good enough signal most sats should auto-vet already (the Fox sats and all sats we have decoders for), and if they don't auto-vet then we would have to look at the waterfall to see why anyway.
I am just wondering if this has to do with how you don't like vetting unvetted observations on your station from other people, just because they aren't observations you care about.
Very, very few of my station's observations are auto-vetted; even on strong overhead passes of the Fox satellites, where voice QSOs seem clear as day, I'll be lucky to get one or two decodes.
Frankly, I think improving auto-vetting across the broad spectrum of modulations and frequency/TLE drifts is likely to be a never-ending battle. Of course, I can barely code a shell script, so my opinion in this area is worth only 1 cent.
I would much prefer the auto-scheduler over improved auto-vetting at this point.
I would like to make a quick comment on this.
I agree it's not a big deal to have a couple of observations mis-vetted, as statistically they will be a small amount compared with the correctly vetted observations.
Also, besides the machine-learning auto-vetting system, there are thoughts of enabling multi-vetting, i.e. collecting several independent vets per observation and combining them statistically. This would mean that more people are needed for vetting, and this is one of the fields where citizen-science help can be used.
For example, we could use platforms like Zooniverse (I recently heard of it from @pierros), where, with some basic instructions, people could easily vet observations and provide extra info about each one (like other signals, terrestrial or satellite, in the same pass, or interesting patterns).
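Just to make the multi-vetting idea concrete, here is a minimal sketch of how several independent vets might be reduced to one verdict. Everything in it (the function name, the thresholds, the verdict labels) is made up for illustration; it is not how any existing system works:

```python
from collections import Counter

# Hypothetical sketch: combine several independent vets of one observation
# into a single verdict. Names and thresholds are illustrative only.

def aggregate_vets(vets, min_vets=3, min_agreement=0.7):
    """Return a consensus verdict ('good', 'bad', or None if undecided).

    vets          -- list of individual verdicts, e.g. ['good', 'good', 'bad']
    min_vets      -- require at least this many vets before deciding
    min_agreement -- fraction of vets that must agree on the winning verdict
    """
    if len(vets) < min_vets:
        return None  # not enough opinions yet, keep the observation open
    verdict, count = Counter(vets).most_common(1)[0]
    if count / len(vets) >= min_agreement:
        return verdict
    return None  # vetters disagree too much, flag for an experienced reviewer


if __name__ == "__main__":
    print(aggregate_vets(["good", "good", "good", "bad"]))  # -> 'good'
    print(aggregate_vets(["good", "bad"]))                  # -> None (too few vets)
    print(aggregate_vets(["good", "bad", "bad", "good"]))   # -> None (split vote)
```

The `None` cases are the interesting ones: contested or under-vetted observations could be the ones routed to experienced vetters, so volunteer effort goes where it matters most.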
Finally, getting closer to having a more detailed waterfall, expressed as data rather than as an image, will allow us to vet observations more easily and more accurately.
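As a small illustration of why data beats an image here: with the raw power matrix of a waterfall you can estimate the noise floor numerically and test for a sustained carrier, instead of eyeballing pixels. The array shapes, the 6 dB threshold, and the function below are all assumptions for the sketch, not an actual detector:

```python
import numpy as np

# Hypothetical sketch: detect a sustained carrier in a waterfall stored as
# a (time, frequency) matrix of power values in dB. Thresholds are assumed.

def has_signal(waterfall_db, threshold_db=6.0):
    """Report whether any frequency bin's average power over the pass
    sits more than threshold_db above the estimated noise floor."""
    noise_floor = np.median(waterfall_db)               # robust global noise estimate
    mean_excess = waterfall_db.mean(axis=0) - noise_floor
    return bool(np.any(mean_excess > threshold_db))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = rng.normal(-100, 2, size=(500, 1024))       # noise-only waterfall
    print(has_signal(noise))                            # -> False
    noise[100:400, 512] += 20.0                         # inject a sustained carrier
    print(has_signal(noise))                            # -> True
```

None of this is possible with a rendered PNG, which is exactly why having the waterfall as data would help both automated and human vetting.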