My two stations are now capturing a lot of frames, but it is time-consuming to validate all the observations scheduled by other users…
For this reason, I’m going to put my stations in testing mode until there is a solution.
73’s from EA5WA Juan Carlos
You don’t have to vet (validate) any observations that are not done by you. Each observer is primarily responsible for their own observations.
Putting your station into Testing mode doesn’t really address the issue. Let us know why you feel this would help.
I don’t really understand why a good observation with a lot of frames is not auto-validated as before…
There are more than 65,000 unvetted observations in the network, so I don’t think it would be a problem if you don’t vet your own observations.
I have stopped vetting my own observations as well:
- I feel that the client isn’t stable enough. My UHF station has crashed numerous times, or the observations become corrupted. My VHF station suffered from the same issues before dying completely. I haven’t taken the time to recreate the SD card and re-install it yet.
- Unlike the other projects and skimmers I operate (from CW / RTTY / WSPR / FT8 for RBN & PSK Reporter or wsprnet, to AIS and ADS-B trackers), which produce results I can personally analyse, there isn’t much I can do with my SatNOGS observations. While tracking objects in orbit should be exciting, I found that my initial enthusiasm wore off very fast. With the exception of the weather satellites, which produce pictures I can look at, none of the hundreds of other observations I was producing daily contained any information I could use… Looking at signals on waterfalls is fun, but not for very long.
I must admit that I often ask myself why I’m even doing this, as I feel there is really nothing in it for me. I would love to have an easy way to understand what that hex data I receive is or stands for… The only time I felt that running 2 stations 24/7 was actually useful was when I received a certificate of appreciation from the Quetzal-1 CubeSat team a couple of weeks back.
73’s Martin 9V1RM
For me, the burden of manually vetting all my observations has increased drastically with this change. I had set up the auto-scheduler for my two stations to only schedule passes with a very high probability of providing demodulated data (using specific satellite selections and a minimum pass elevation). In the past I only had to vet a few dozen passes per week; now it is several hundred.
For this reason I’ve changed my auto-scheduler settings to only schedule the most important satellites (really only Meteor MN-2) until some form of auto-vetting has been re-implemented. My suggestions would be to finalize the waterfall uploading so that scripts can analyze the data in the waterfalls, and to cross-check decoded packets between stations that observed the same satellite pass/transmitter combination.
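To make the cross-check suggestion concrete, here is a rough sketch of the idea. This is purely hypothetical on my part: the data shapes and the `cross_check` function are my own invention, not anything in the SatNOGS network code.

```python
from collections import defaultdict

def cross_check(observations):
    """Group observations by (satellite, transmitter) and flag a decoded
    frame as corroborated when another station that observed the same
    pass/transmitter combination decoded the exact same bytes."""
    by_pass = defaultdict(list)
    for obs in observations:
        by_pass[(obs["norad_id"], obs["transmitter"])].append(obs)

    corroborated = set()
    for group in by_pass.values():
        # Map each decoded frame to the set of stations that received it.
        seen = defaultdict(set)
        for obs in group:
            for frame in obs["frames"]:
                seen[frame].add(obs["station"])
        # A frame heard by two independent stations is very unlikely
        # to be a false positive.
        corroborated.update(f for f, stations in seen.items()
                            if len(stations) > 1)
    return corroborated
```

An observation whose frames land in the corroborated set could then be auto-vetted as good without anyone looking at the waterfall.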
I completely agree with @cgbsat and also with Martin
Martin, have you seen https://dashboard.satnogs.org ? The hex data that stations receive are visualized there.
Hi, no, I had not seen the dashboards! Thanks for pointing them out!
Is there a way to turn on autovetting for only certain sats? For example, I have never gotten a false positive (that I am aware of) from the Fox sats. But there are tons of false positives for CW sats and the NOAA sats… maybe the autovetting should be selectable by the admins? Just a thought.
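Something like a per-satellite policy could be quite simple. A minimal sketch of what I have in mind, assuming admins maintain a whitelist of NORAD IDs (the IDs and the `should_auto_vet` helper below are illustrative, not part of SatNOGS):

```python
# Hypothetical admin-maintained whitelist of satellites whose decoders
# essentially never produce false positives (IDs are examples only).
AUTO_VET_WHITELIST = {43017, 43770}

def should_auto_vet(norad_id, decoded_frames):
    """Auto-vet only whitelisted satellites, and only when the
    observation actually produced demodulated frames."""
    return norad_id in AUTO_VET_WHITELIST and decoded_frames > 0
```

CW and NOAA observations would simply stay off the whitelist and keep requiring manual vetting.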
There are some upcoming changes (check this issue) that will help with this problem and allow us to start working on better auto-rating of observations. They will not be perfect at the start, but they will get better and better as we change/add auto-rating rules based on the artifacts (waterfall/audio/data etc). Hopefully we will have them deployed in production during the next week.
It took a little longer due to other priorities, like the changes in DB, but the changes are in dev and need a little more testing before going to production. Stay tuned…