Changes to observation rating and waterfall vetting

We have been discussing for a long time that observation vetting needed a reform. Recently we disabled auto-vetting, as it was negatively affecting other parts of the project.

Today, we are moving forward with a new way to rate observations, which will help us develop tools and smarter processes to automate the whole procedure. The main difference is that from now on there will be an observation status, and each artifact of an observation will have its own separate vetting status.
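To make the distinction concrete, here is a minimal sketch of the idea in Python. All names here are hypothetical and the actual SatNOGS Network models differ; the point is simply that the observation carries one overall status while each artifact keeps its own vetting status:

```python
# Minimal sketch with hypothetical names; not the actual SatNOGS Network code.
from dataclasses import dataclass
from enum import Enum


class ObservationStatus(Enum):
    UNKNOWN = "unknown"
    GOOD = "good"
    BAD = "bad"
    FAILED = "failed"


class VettingStatus(Enum):
    UNVETTED = "unvetted"
    GOOD = "good"
    BAD = "bad"


@dataclass
class Observation:
    # One overall status for the observation as a whole.
    status: ObservationStatus = ObservationStatus.UNKNOWN
    # Vetting starts with the waterfall; more artifact fields can be
    # added later as vetting expands to the other artifacts.
    waterfall_vetting: VettingStatus = VettingStatus.UNVETTED
```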

Artifact vetting will start with the waterfall but, in the future, will be expanded to the other artifacts too. This will give us the opportunity to improve and develop methods for vetting each artifact separately, more accurately and in some cases automatically.

Waterfall vetting will be quite similar to the previous vetting system. For more details, please visit the wiki page that explains waterfall vetting in detail.

While waterfall vetting will remain a manual process for now, observation rating is fully automated. An algorithm that checks several criteria based on the artifacts will try to rate each observation. This will not always be possible, so manual intervention will be needed to vet artifacts and help the rating algorithm decide.

The algorithm for automated observation rating is not perfect and is still a work in progress, so if you find an inconsistency you can open an issue about it or fix it in the SatNOGS Network repository. More details about rating can be found on the wiki page.

Note about auto-vetting:

The algorithm for rating observations now works similarly to auto-vetting. An observation with data (demodulated/decoded frames or decoded images) will be rated as “Good”, unless the mode of the observation’s transmitter is either CW or FM. However, this may change in the future. Of course, as in the past, there will be cases in which noise frames are produced and give a wrong result; in these cases manual vetting of artifacts will play a significant role until we improve the rating algorithm and artifact vetting.
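As a rough illustration, here is a simplified sketch of that rule in Python. The function name, inputs and return values are hypothetical, and the real criteria in the SatNOGS Network repository are more involved:

```python
def rate_observation(has_data: bool, transmitter_mode: str) -> str:
    """Simplified sketch of the rating rule; names are hypothetical.

    `has_data` stands in for the presence of demodulated/decoded
    frames or decoded images among the observation's artifacts.
    """
    # Data is enough for "Good", unless the transmitter mode is CW or
    # FM, where noise frames are too easily produced to trust alone.
    if has_data and transmitter_mode not in ("CW", "FM"):
        return "good"
    # Otherwise the rating is left undecided; manual artifact vetting
    # will help the algorithm decide.
    return "unknown"
```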


One known issue is that older observations with only data or audio but no waterfall cannot be rated. In the next few days we are going to run all these unrated old observations through the new rating algorithm so they get rated.


What about a “Failed” rating when there is no waterfall? Is it possible to include this in the rating algorithm?

Thanks

Observations are rated as failed when there are no artifacts or when the artifacts are malformed. Is your case one of these two? If not, please open an issue with details and example observations.

Another thing we observed after the changes is that some observations are rated as failed even though they have a good waterfall. These cases are rated as failed due to the significant difference between the audio artifact’s duration and the scheduled duration. In these cases, if you vet the waterfall as good, the observation will become good.
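To illustrate, here is a rough Python sketch of the failure logic as described in this thread. The names and the tolerance value are assumptions, not the actual implementation in the SatNOGS Network repository:

```python
# Assumed tolerance: how much shorter than scheduled the audio may be
# before the observation is considered failed (not the real value).
AUDIO_DURATION_TOLERANCE = 0.5


def is_failed(has_artifacts: bool, artifacts_malformed: bool,
              audio_seconds: float, scheduled_seconds: float,
              waterfall_vetted_good: bool) -> bool:
    """Rough sketch of the "failed" rating; names are hypothetical."""
    # Vetting the waterfall as good overrides the duration check and
    # the observation becomes good.
    if waterfall_vetted_good:
        return False
    # No artifacts, or malformed artifacts, rate the observation failed.
    if not has_artifacts or artifacts_malformed:
        return True
    # Audio significantly shorter than the scheduled duration also
    # rates the observation failed.
    return audio_seconds < scheduled_seconds * AUDIO_DURATION_TOLERANCE
```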

To spot and vet the waterfalls of these observations, you can use the filters on the observations page. Here is an example of all failed observations that have a waterfall which has not been vetted.

I’m currently not sure why the audio files are so short; this is something we need to determine. I’m going to open an issue to investigate it.

Here are some examples of observations that have a status of “Failed”, vetted as “no signal in waterfall” and have short audio files:

https://network.satnogs.org/observations/2726248/
https://network.satnogs.org/observations/2730794/
https://network.satnogs.org/observations/2730793/
https://network.satnogs.org/observations/2726244/

Interesting to note that these started to appear after 8/20.

This started to happen after 2020-08-20, as that is when we landed the changes in production. We haven’t yet rated older observations that were unvetted; this will eventually happen.

I guess we are going to discuss this in the next developers meeting in order to find a solution.

About the short-duration audio files: there was a bug in the flowgraphs that has been fixed in the latest client, v1.3.4.


Thanks for the update @fredy!

It looks like the wiki needs an update though – it is still using the old waterfall color scheme.


After the change I got a few observations which are marked as “Failed” but should be “Without Signal”, as the only signals visible in them are obviously from other satellites. Manual vetting does not change this status.

Sometimes there are even short valid signals in the waterfall; fortunately, manual vetting with “Has Signal” does override the “Failed” status.

Examples of wrong “Without Signal” can be found in the following query:

https://network.satnogs.org/observations/?future=0&good=0&bad=0&unknown=0&norad=&observer=&station=378&start=2020-08-25+08%3A38&end=2020-09-14+08%3A38

Example of wrong “Has Signal”:

https://network.satnogs.org/observations/2633307/

Hi @mdz, thanks for the report. This is a known issue; you can track its development at https://gitlab.com/librespacefoundation/satnogs/satnogs-network/-/issues/756. To avoid these corrupted audio observations, please update your station to the newest client version (1.3.4).

Please keep in mind that the vetting changes are not finalized; this is an ongoing process of moving from the old waterfall vetting to a more useful and meaningful rating of all artifacts, with the observation status shown using different indexes. Stay tuned for more discussions and updates. :slight_smile:


Hi fredy, thanks for your quick and helpful response! I just noticed the new filtering options and I think I like them :slight_smile:

Back to my problem: I updated my client to version 1.3.4 and rebooted, but I still get ‘Failed’-rated passes which should be ‘Bad’ or ‘Unknown’.

https://network.satnogs.org/observations/2839061/

I noticed there is another pass starting less than three minutes after that one, scheduled by the Auto Scheduler. Could something (processing/upload) taking longer than the gap cause the ‘Failed’ status? Is it worth submitting this behaviour as an issue, or should I just wait for the next update?

Update: I still get plenty of these ‘Bad’ passes on both of my stations, with signal and without. I have not been able to identify a pattern yet.

I left them unrated, so you can see them all:

Station 355: https://network.satnogs.org/observations/?future=0&good=0&bad=0&unknown=0&norad=&observer=&station=355&start=&end=

Station 378: https://network.satnogs.org/observations/?future=0&good=0&bad=0&unknown=0&norad=&observer=&station=378&start=&end=

I am seeing ‘Failed’ observations even when there is a spectrum and data. Can I just vet my own observations instead?

@nigel please read this recent thread about the vetting topic and let me know if that answers your questions.