Also, just because an observation is automatically vetted does not mean no one will see it. I always check all my NOAA obs using the filter settings, and I also look through other sats in the full database.
I believe that with automation, observations of dead satellites will be kept to a minimum, as they will be low on the priority list.
Bad observations count as positive for a station's statistics; only failed observations count as negative.
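To make that rule concrete, here is a minimal sketch of how a station's success rate could be computed under it. The function name and the exact formula are assumptions for illustration; the actual SatNOGS Network implementation may differ.

```python
# Hypothetical sketch of the statistics rule described above:
# "bad" observations still count as positive (the station worked,
# even if the data is poor); only "failed" counts against the station.
def station_success_rate(good: int, bad: int, unknown: int, failed: int) -> float:
    positive = good + bad        # bad still proves the station functioned
    total = positive + failed    # unvetted ("unknown") observations are ignored
    return positive / total if total else 0.0

print(station_success_rate(good=80, bad=15, unknown=20, failed=5))  # 0.95
```

The key point is that vetting an observation as failed, rather than bad, is what actually lowers a station's statistics.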
There are indeed difficulties in vetting an observation as failed or bad, as it's not always clear whether the satellite isn't transmitting or the station isn't working correctly. But I believe this can be improved in the future.
Any thoughts on this?
I have already contacted people from @motionlab and gave them some examples of incorrect vetting they did, and also linked the wiki page that explains correct vetting: https://wiki.satnogs.org/Operation#Rating_observations
I understand your decision to have your station utilized only when there is a need or a specific event, and this is why the 100% utilization of your station has already been stopped, as you requested.
However I want you and the rest of the station owners to consider this:
SatNOGS was created with the goal of offering data to satellite owners and researchers (and not only them). With this in mind, the ideal SatNOGS would be able to observe satellites 24 hours per day. As we are far from that ideal, we need to utilize the network as much as we can and collect as much data as possible.
This is why 100% utilization is not a contest but a way to fulfill the SatNOGS mission. Of course, having less data/observations will not make or lose money, as you said, but it will bring less data to those who (will) use it, who (will) try to draw conclusions from it, and who (will) rely on it. I also want to point out that “no transmission” data is data too; this is why from time to time there are observations of dead satellites.
Let me be more specific and give some recent examples where getting more data would have been valuable:
UPSAT: the satellite is currently in safe mode, and from time to time it wakes up and transmits data. In an ideal SatNOGS with 24h/day observation, we would be able to get more data in order to better analyze why it is going into safe mode. The current situation is that we have received it only twice in the last year, from random observations.
PICSAT: this satellite was working and transmitting for several days but suddenly stopped. 24h/day observation might have helped us understand why it stopped. Unfortunately, the last data SatNOGS and other amateurs got wasn't enough to reach a safe conclusion.
And there are more examples, with more to come, as deployments of cubesats and other objects in LEO and other orbits are estimated to increase. Given that many of them are built by people creating a satellite for the first time, giving them as much data as we can would be very valuable to their work.
Anyway, as I said, your request is understandable and respected, and the SatNOGS community is thankful for any contribution, no matter how small or big it is.
One very important factor to keep in mind is that most, if not all, rotator-based ground stations in the network are not built for 100% utilization. Running them 100% of the time may therefore leave many stations with a very low MTBF. I know many people like fixing their stuff all the time; personally, I prefer not to do it more often than once every 5 years. It would also look bad in the network statistics.
I seriously doubt that we will be able to attract many “big ground stations” if they are told that their station will be used 100% of the time. Therefore, it's a good idea to let station owners specify a target utilization percentage. For now, telling the obs team to take it easy on my GS has worked for me, so I am not complaining.
There is also another downside to the present practice. With ~1000 observations per day it is practically impossible to do any meaningful quality control, and that is, in my opinion, a big problem, since SatNOGS is still very much under development.
Technically, we don't know what percentage of the data in the DB is useful. It could be 90%, it could be 10%, or it could be something in between. Consider the effect that can have on new people looking around to decide whether they should use SatNOGS for their mission. Will they look at the number of observations per day, or at the percentage of useful observations among the last 10-20 observations? Probably the latter, since spaceflight has always been about quality rather than quantity.
This could indeed be a user-editable attribute of the station; it would help with stations that are having issues, especially when automatic scheduling goes live.
Yet this is what we do now (@BOCTOK-1 and others are working around the clock to make sure we are as sanitized as possible). The only fair comment here is that the current method does not scale, but there are ideas on how to get around that too: 1. we could crowd-source the vetting, and/or 2. we could partially or fully rely on ML algorithms to vet (and yes, we do have the scale to do that now).
You mean Network, maybe? Because in DB we are pretty sure the data are useful (given that people export them, analyze them, and we can now build dashboards around them). Keep in mind that besides CW, almost all other modulations and encodings include some sort of CRC, so we can be confident that this is valid data.
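As an illustration of the kind of integrity check most framed modes carry: AX.25, common among amateur satellites, protects each frame with a CRC-16/X.25 frame check sequence. Below is a minimal sketch of that checksum; the actual SatNOGS demodulator pipeline differs, this only shows the arithmetic that lets a decoder reject corrupted frames.

```python
def crc16_x25(data: bytes) -> int:
    """CRC-16/X.25 (reflected poly 0x8408, init 0xFFFF, final XOR 0xFFFF),
    as used for the AX.25 frame check sequence."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x8408  # reflected CRC-CCITT polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFF

# Standard check value for this CRC variant:
print(hex(crc16_x25(b"123456789")))  # 0x906e
```

A frame whose transmitted FCS does not match this value over its payload is simply dropped, which is why decoded frames in the DB can be trusted to be valid data.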
Oh, they do. And they reach out, and they work with us to create dashboards and integrate with SatNOGS. I find it hard to believe that anyone is having “quality issues” with Network as it is (especially given the alternatives). Could we be better? Sure! That's why we are investing more in our development and outreach. Should we stop and go for handpicked quality? Absolutely not! (IMHO)
I say, bring in the scale, and while we do that, let's be clever about how we qualify and validate our operations and data.
My understanding of the “100% utilization” target is that when a ground station is available to SatNOGS, it should be utilized fully rather than sitting idle waiting for someone to schedule something.
The fact that there is no way for the owner to define such station availability to SatNOGS, nor a way to easily override already-scheduled observations, is purely a limitation of the current design and implementation, not a SatNOGS network policy. These limitations were identified a long time ago and there has been some discussion on how to fix them properly. One thing is certain: any long-term solution to such problems can only go through an increased level of automation.
Until then, we can only rely on manual communication between community members to advertise ground station availability, status, or intended use.
That’s a very good idea.
Something like 3 different settings:
- utilize 100%
- only targeted observations (ISS contacts, new sats, troubleshooting)
- only emergencies / no 3rd party scheduling desired
I'd use the 1st setting for my station; it's a purpose-built setup, I'm not using a rotator, everything is solid-state, and my network connection is fast and unmetered.
So there is no problem using it to its full potential.
And maybe also an option to have your station disabled between X and Y (for example, when you are free on weekends and want to use the radio yourself).
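The proposed settings could be sketched as a small station-availability model: an enum for the three utilization modes plus an optional daily disabled window. All names here are hypothetical, not the actual SatNOGS Network schema.

```python
from dataclasses import dataclass
from datetime import time
from enum import Enum
from typing import Optional

# Hypothetical sketch of the station settings proposed above;
# field and value names are illustrative only.
class Utilization(Enum):
    FULL = "utilize 100%"
    TARGETED = "only targeted observations (ISS contacts, new sats, troubleshooting)"
    EMERGENCY = "only emergencies / no 3rd-party scheduling desired"

@dataclass
class StationAvailability:
    utilization: Utilization = Utilization.FULL
    disabled_from: Optional[time] = None   # e.g. owner uses the radio themselves
    disabled_until: Optional[time] = None

    def accepts_scheduling(self, now: time) -> bool:
        """True if a 3rd-party observation may be scheduled at `now`."""
        if self.disabled_from and self.disabled_until:
            if self.disabled_from <= now < self.disabled_until:
                return False  # inside the owner-defined disabled window
        return self.utilization is not Utilization.EMERGENCY

station = StationAvailability(disabled_from=time(18), disabled_until=time(22))
print(station.accepts_scheduling(time(20)))  # False: inside the disabled window
print(station.accepts_scheduling(time(12)))  # True
```

A scheduler could then simply skip stations whose `accepts_scheduling()` returns False, which would also solve the "only when I'm not using it" case discussed earlier in the thread.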
Yes, I meant the database in the Network.
I was indeed thinking about verification and validation when I wrote “quality control”. Are we doing the right thing, and are we doing things right? To me, it seems easier to do such things at a small scale.
I just want to apologize for this; I am very, very sorry for causing trouble. The reason I scheduled so many observations was to help the system get more recordings, and to spend some time manually vetting them in order to bring more data into the database. My misunderstanding was that I thought vetting them as failed would save resources, because I assumed the data would then not be stored or processed. Please treat the observations Motionlab Berlin vetted as failed as bad instead.