Maintenance mass tool

Hi,

With the node fully scheduled 24x7 at high usage, the maintenance of received passes is becoming more and more of a burden, especially if the node fails for some reason and many malformed passes are logged (e.g. passes with no audio).

Other automatic filtering criteria are also possible.

But the need to process them manually is becoming a nuisance.

I really want a tool to do that in bulk; probably something like it already exists at the administrator level to collect data across many nodes automatically. I have fairly extensive development experience, so with some clues on the API or pointers on where to start, I could develop it myself, if only for my own use.

Thanks, Pedro.

Hey @lu7did,

I’m fully aware of how troublesome it is to vet many observations daily or weekly. Currently, apart from auto-vetting as “good” all observations that contain data, there is no way to avoid vetting. However, even this auto-vetting is not accurate and produces many false positives, especially lately with the increased sensitivity of some decoders/demodulators, which allows more frames to be decoded but, due to the small checksum, also returns several false-positive frames. There is a discussion about deactivating this auto-vetting, which would increase the burden of vetting but improve its accuracy.

Especially for the failed observations (without audio/waterfall), my thought is that we could create a task that auto-vets them as failed once a certain period has passed without any upload from the client, for example 30 min after the end of the observation. Of course there are exceptions in uploading, so we could handle these by undoing the vetting if something is uploaded after the 30 min period.
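The auto-fail task described above could be sketched roughly as follows. This is only a sketch of the idea, not Network code: the 30 min grace period is the value suggested in this post, and the function names are made up for illustration.

```python
from datetime import datetime, timedelta, timezone

# Grace period after an observation's end before it may be auto-failed.
# 30 min is the value suggested in the post above, not an implemented setting.
GRACE_PERIOD = timedelta(minutes=30)

def should_auto_fail(obs_end, has_upload, now=None):
    """Return True if an observation should be auto-vetted as failed:
    the grace period after its end has elapsed and the client has
    uploaded nothing (no audio, waterfall, or data)."""
    now = now or datetime.now(timezone.utc)
    return (not has_upload) and now >= obs_end + GRACE_PERIOD

def should_undo_auto_fail(has_upload, vetted_status):
    """Handle the late-upload exception: undo the automatic 'failed'
    vetting if something is uploaded after the grace period."""
    return has_upload and vetted_status == "failed"
```

The same check could run periodically server-side; the late-upload case is why the vetting must remain reversible.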

For the rest of the observations, there are thoughts about training systems to auto-vet observations using data from waterfalls. We have taken some steps in this direction by starting to work on waterfalls that will be data rather than images, which makes it easier to create a training set and feed it to such a system.

Also, there was a big discussion about changing the way we vet, making it simpler and more objective so that it is easier for both people and trained systems. However, on both fronts more work needs to be done. Another option is to use decoders to check whether uploaded data are valid frames from a satellite. Other ideas are also welcome for discussion.

Currently the API doesn’t allow observation vetting; I’ve just opened an issue to discuss this option. Maybe in combination with the change to multi-vetting, i.e. vetting by more than one person (and maybe bots?), which is tracked in this issue, it could lead us to a more automated and accurate result.

The above may take some time to be fully discussed and implemented, as there are other ongoing tasks, open issues, and the operations and maintenance of a growing ecosystem. Any help on all these fronts is more than welcome and much appreciated!

So, until we move towards more automated vetting, my suggestion for making manual vetting easier is what I do myself:

  1. Filter observations by satellite; this helps you vet faster, since you know what to expect to see.
  2. Use “Open all in tabs” at the bottom of the observations page. Please note that this opens 50 tabs, so it may be too heavy for your PC/laptop.
  3. Use the keyboard shortcuts on the observation page to vet: g for “good”, b for “bad”, f for “failed”, u to undo the vetting.

With the above I usually vet 50 observations in ~3-5 min (including loading time, which may vary depending on location). Of course it is not ideal, but it is a workaround for now. By the way, for the 2nd step I’m soon going to work on this issue, which will allow navigating through a list of observations, so there will be no need to open 50 tabs at once, and I expect it will make vetting faster.

Hi,

It would be good to be able to flag observations as failed without even looking at them. I’ve noticed that both my stations will sometimes fail if two passes are too close to each other. When that happens, the only way to return to a functional station is to reboot the RPi (or kill the hanging Python script). By the time I realize, I sometimes have 20-30 observations to flag as failed. If we could just select all unvetted observations for a specific period of time, that would be very useful.

Also, despite having 1 Gb fiber here and a fully wired network, the SatNOGS server is rather slow, at least from Asia, and sometimes struggles to serve the content even on a powerful system.

73’s Martin 9V1RM

Hi,

I would second that SatNOGS is a bit slow on page loads. Has anyone looked into what the bottleneck is (web servers, database, etc.)? I guess money for hosting is limited.

For vetting, I was just thinking about a quick-vet tool. It could show a lower-resolution image that you click on the left or right of (or swipe, haha, like Tinder) for good or bad, running through all your unvetted observations one after another. You could have an “ignore” option for the ones that need more investigation. Could this be worth prototyping?

I just had 121 to go through, and it took ages in Edge to click “Open all in tabs” (in Chrome that functionality doesn’t seem to work). The browser nearly died, and even using the b and g shortcuts it still took a long time. If I had to spend a few days off doing it, it would become too much of a task.

Failed passes without audio or data could be auto-vetted as failed by the client. I just created satnogs-client#388 to track this enhancement suggestion.

Thank you for your detailed response.

I believe you’re all struggling with fine-tuning the “centralized vs. decentralized” conundrum of how to manage this network.

I’m not requesting or suggesting auto-vetting, as that might require considerable heuristics behind it to avoid type I or type II errors.

What I’m suggesting is tools in the hands of station owners that help automate maintenance by doing what station owners can already do anyway (at much greater effort), such as deleting observations (either actually deleting them or marking them as failed or unsuccessful) based on some criteria.

Since development always requires effort that might be better directed elsewhere, what I’m proposing is an API that can only be executed by owners, doing what owners can already do. With such a tool, owners could do their maintenance chores, only more efficiently.

The “open all in tabs” trick is as old as injustice, but it is too little help. The estimate of manual interaction seems a bit too optimistic: 3-5 minutes per batch of 50 observations is unrealistic; 10 minutes or more, driven by server response time and bandwidth limitations, seems closer to reality. I currently have over 900 observations sitting and waiting for me to spend 3-4 hours of continuous point-and-click to classify them. Many of them are known failures. It’s hard to spend that much time.

With an API available, experienced developers like me can develop tools, and then share them if they prove useful (or keep them to themselves if not), to classify observations by known patterns, such as a pass with no recorded audio, certain satellites, or certain situations (e.g. an eclipse pass of a satellite with dead batteries), or whatever else.

You should not think too hard about analyzing the impact of things a determined owner can already do, just by spending much more time and effort.

The autoscheduling API and tools are a good example: a clean implementation allows much better management of our own node than the tedious manual point-and-click with a two-day horizon used before.

The administrators have access to the database, and I have no doubt in my mind that they can mass-process things, so perhaps it’s time for them to consider releasing that grip to the station owners over their own stations. Something along those lines is far (far) less effort to implement than any GUI or user-driven interface; it’s just a wrapper around the database presenting a station owner’s view of their own data, with some controlled interactions over it so as not to mess with somebody else’s data.

A concrete proposal: an API to list all past passes with some filters, such as satellite (NORAD ID), current status (planned, pending, vetted good, vetted bad, or even failed), or a combination of them, within some time window (expressed as datetime from/to), and allowing them to be marked as FAILED.

This way there is zero risk of falling into a type I or type II error while vetting and leaving a corruptly vetted pass.

Simple scripting will do the rest.
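Such simple scripting could look roughly like this. To be clear about assumptions: the listing filters (ground_station, vetted_status, start/end) exist in today’s Network API, but the mark-as-failed endpoint is purely hypothetical, since the API does not yet allow any vetting; this sketch only builds the requests rather than sending them.

```python
from urllib.parse import urlencode

NETWORK_API = "https://network.satnogs.org/api"

def list_passes_url(ground_station, status="unknown", norad_id=None,
                    start=None, end=None):
    """Build the query URL for listing one station's past passes,
    filtered by vetting status, satellite, and time window.
    These filters exist in the current /api/observations/ endpoint."""
    params = {"ground_station": ground_station,
              "vetted_status": status,
              "format": "json"}
    if norad_id:
        params["norad_cat_id"] = norad_id
    if start:
        params["start"] = start
    if end:
        params["end"] = end
    return f"{NETWORK_API}/observations/?{urlencode(params)}"

def mark_failed_request(observation_id):
    """Return a (method, url, payload) triple for the HYPOTHETICAL
    mark-as-failed endpoint proposed above. It does not exist yet;
    the method, path, and payload are guesses for illustration."""
    return ("PATCH",
            f"{NETWORK_API}/observations/{observation_id}/",
            {"vetted_status": "failed"})
```

An owner’s script would fetch the listing URL, apply whatever local criteria it likes, and issue one mark-failed request per matching pass.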

Thanks for your attention, Pedro LU7DID

I use Chrome on 4 or 5 different machines and “open all in tabs” works just fine. You have to whitelist satnogs.org to allow pop-ups, however.

Hi,

To my knowledge there is no such tooling for mass processing (other than hand-crafted SQL queries, which don’t scale), but I might be wrong. Maybe @fredy can give some further details here on whether there are regularly used tools/scripts for admins for mass-vetting?


Apart from the fact that vetting is not yet possible, the (unfortunately still undocumented) satnogs-network observations API endpoint provides exactly this. For example, the following query returns all unvetted observations on station 499 since 2020-01-01:

$ curl -s "https://network.satnogs.org/api/observations/?id=&ground_station=499&vetted_status=unknown&start=2020-01-01T00%3A00%3A00&end=&format=json" | jq '.'
[
  {
    "id": 1982465,
    "start": "2020-04-07T09:03:52Z",
    "end": "2020-04-07T09:15:55Z",
    "ground_station": 499,
    "transmitter": "gtzv79Zp7kPymUekFaA2w4",
    "norad_cat_id": 40906,
    "payload": null,
    "waterfall": null,
    "demoddata": [],
    "station_name": "LU7DID",
    "station_lat": -34.804547,
    "station_lng": -58.387932,
    "station_alt": 40,
    "vetted_status": "unknown",
    "archived": false,
    "archive_url": null,
    "client_version": "",
    "client_metadata": "",
    "vetted_user": null,
    "vetted_datetime": null,
    "rise_azimuth": 14,
    "set_azimuth": 191,
    "max_altitude": 86,
    "transmitter_uuid": "gtzv79Zp7kPymUekFaA2w4",
    "transmitter_description": "CW TLM 22wpm",
    "transmitter_type": "Transmitter",
    "transmitter_uplink_low": null,
    "transmitter_uplink_high": null,
    "transmitter_uplink_drift": null,
    "transmitter_downlink_low": 145790000,
    "transmitter_downlink_high": null,
    "transmitter_downlink_drift": null,
    "transmitter_mode": "CW",
    "transmitter_invert": false,
    "transmitter_baud": 22,
    "transmitter_updated": "2019-04-18T05:39:53.343316Z",
    "tle": 572027
  },
  ...
]
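The client-side filtering the thread asks about can be done on exactly this JSON. A minimal sketch, assuming the field names shown in the response above; the “no artifacts at all means a failure candidate” criterion is this thread’s suggestion, not a Network rule:

```python
def failed_candidates(observations):
    """From a list of observation dicts as returned by the
    /api/observations/ endpoint, keep only those with nothing
    uploaded at all: no audio payload, no waterfall, no demoded
    frames. These are the candidates to mark as failed."""
    return [obs for obs in observations
            if obs.get("payload") is None
            and obs.get("waterfall") is None
            and not obs.get("demoddata")]
```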

So, I suggest discussing the implementation of “vetting via API” in satnogs-network#724 linked above, so that once this is done, you can start building the vetting tool you suggested.

Sincerely,
Fabian

Hi Fabian,

Scripts and tools count as “administrator tools”. I didn’t mean that the administrators have some fancy GUI, but I’m sure they don’t operate by clicking “open all in tabs” either.

Again, I am not (¡NOT!) advocating automatic “vetting”; to do that, a fair analysis is required, whose heuristics are probably beyond any simple task. What I’m asking for is a tool that helps profile the queue of passes pending processing using different criteria (it seems such a tool already exists but, as I assumed in my post, hasn’t been published beyond the administrators) and allows marking a pass as FAILED or leaving it as it is, pending manual vetting as good or bad.

The API listing you show is half of what is needed; having a list of unvetted passes wouldn’t be of much help without the ability to mark them as FAILED when an observation matches some criterion (e.g. “waterfall”: null).

Having a resource that does 50% of what I’m asking, but with no way to understand how to use it because it’s undocumented, is, I would assume, something close to a joke, isn’t it?

Thanks, Pedro LU7DID

As @kerel said, apart from raw SQL queries, which it is not wise to use on production sites, there is no such tool, neither a GUI nor simple admin actions. Like all users, we (admins) use the “Open all in tabs” functionality in combination with the filters on the observations page. For example: filter all the observations between two dates that have no waterfall and no audio, “Open all in tabs”, and then hit “f” for failed and “Ctrl + F4” (or middle-click) to close the tab, until all the tabs are closed. This is all we have for now, until we find some time to implement something better.

There is ongoing work in both SatNOGS Network and SatNOGS DB to automatically generate an API client and documentation for each project by using/following an OpenAPI schema. Until then, the best way to understand how the API works is by visiting https://network.satnogs.org/api and https://db.satnogs.org/api.

For the remaining 50% of the needed API, let’s discuss it (why, what, and how) at https://gitlab.com/librespacefoundation/satnogs/satnogs-network/-/issues/724.

Hi Fredy,

Thanks for your answer.

This is what I got when accessing those URLs; is this what you meant?

Just for the record, I insist that I’m not asking for a way to perform “vetting”, but to mark a given pass as failed based on some heuristics related to the satellite, the time, the lack of a waterfall, or other factors that might emerge from a query. Vetting as “BAD” or “GOOD” is probably outside the realm of short-term possibilities.

Regards, Pedro LU7DID

Hi,

Either the site is having network issues or I’m not allowed to post comments.

Regards, Pedro LU7DID

GitLab status looks good at the moment: https://status.gitlab.com/

My network is otherwise operating OK.

Regards, Pedro LU7DID

Yes, clicking on each of the links takes you to each endpoint, where you can also check which filters you can apply and what info you can get.

Currently, marking as failed is done by vetting an observation as failed, so we would need to allow vetting in general through the API. Maybe we can restrict it, but this is a matter to discuss.

That’s strange; let us know if it persists. Also try refreshing the page, it may help.

Hi Pedro,

the problem here is that by creating an API for vetting, we open a door to easily automating the task, which is something we actually want to prevent at this point, as small programming errors can lead to huge amounts of false data. The long-term goal of “vetting” is to be used with machine learning in order to automatically classify observation data. We initially need to build a training set, which can only come from humans.

I believe I’ve stated several times that automated vetting is out of reach for a while; I’m aware of the implications of type I or type II characterization of passes.

This is exactly why, and I repeat again, my suggestion is to allow the API only to mark passes as FAILED; this way there is zero risk of generating wrong data.

The same API would probably also need to enable the reverse, i.e. returning a FAILED pass to UNKNOWN (pending vetting), so again, zero chance of letting barbarians through the gate.
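The restriction described here is small enough to express as an explicit whitelist of state transitions. A sketch, assuming the status names used by the current API (“unknown”, “failed”, “good”, “bad”); the whitelist itself is this proposal, not existing behavior:

```python
# The only two transitions the proposed restricted API would permit:
# mark an unvetted pass as failed, or undo that and return it to
# unknown. Vetting as good/bad stays a manual, human-only action.
ALLOWED_TRANSITIONS = {
    ("unknown", "failed"),
    ("failed", "unknown"),
}

def is_allowed(current_status, new_status):
    """Return True if the restricted vetting API should accept
    this status change, per the whitelist above."""
    return (current_status, new_status) in ALLOWED_TRANSITIONS
```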

Regards, Pedro LU7DID

Hi,

+1, I really like this suggestion!

Best wishes,
Fabian

We can do this check in Network; no need for an API. In fact, including the FAILED flag as a vetting option was a big mistake. A failed observation should be automatically detected by the network based on very specific criteria and be excluded from vetting.

Again, the unresolved “centralized vs. decentralized” tension.

I believe facts are mixed with opinions in your response. “We can do…”, but the fact is you are not doing it, and I believe there is no statement of direction on whether the network will do it.

Meanwhile, a failed observation is a known state of the system.

Automatic detection by the network might or might not be a simple thing for states other than FAILED.

Today station owners need to pinpoint that by hand, and what I’m asking for is help to do that chore programmatically. So while you discuss whether the laser to do it should be green or red, station owners are still using a hammer.

Once some initial API to mark passes as failed (or reverse that) is in place, nothing prevents later applying some yet-to-be-developed heuristics for more automated vetting, which I believe will take much more discussion.

As a station owner, I would like to have a say in what the “specific criteria” are; if at all possible, I would like to define them for my own station.

Regards, Pedro LU7DID