Regarding Observation 2371034 … The signal strength drops across the entire received bandwidth around the 180 second mark, as if maybe the LNA dropped out. The LNA seems to be working fine… or at least it lights up when I activate it via the command line.
I don’t think this observation should be marked “failed”, as the satellite’s CW signal can clearly be seen in the early portion of the waterfall.
As for the strength dropout, I have observed this on all of my RTL-SDRs when there is a very strong signal near where I am tuned, but not close enough to show up in the active waterfall. Essentially, the strong signal blows out the front end of the SDR and you lose all reception at your frequency of interest. I’m not sure there is much to be done to fix this other than adding physical filters.
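For anyone who wants to convince themselves that an out-of-band blocker really can erase an in-band signal, here is a minimal, hypothetical Python sketch (this is not SatNOGS or rtl-sdr code; all frequencies and amplitudes are made up) that models the front end’s ADC as a hard clipper:

```python
# Toy model of SDR front-end overload: a strong off-channel blocker
# saturates the ADC and suppresses a weak in-band signal of interest.
import numpy as np

fs = 1_000_000            # sample rate, Hz (made-up numbers throughout)
n = 100_000               # number of samples
t = np.arange(n) / fs

# Weak in-band signal of interest at 10 kHz, strong blocker at 300 kHz
weak = 0.01 * np.exp(2j * np.pi * 10_000 * t)
blocker = 5.0 * np.exp(2j * np.pi * 300_000 * t)

def adc(x, full_scale=1.0):
    """Model front-end saturation as hard clipping of I and Q."""
    return (np.clip(x.real, -full_scale, full_scale)
            + 1j * np.clip(x.imag, -full_scale, full_scale))

def inband_power_db(x, f=10_000):
    """Power (dB, arbitrary reference) in the FFT bin of the weak tone."""
    spec = np.fft.fft(x * np.hanning(len(x)))
    return 20 * np.log10(np.abs(spec[int(f * len(x) / fs)]) + 1e-12)

clean = inband_power_db(adc(weak))             # blocker absent
jammed = inband_power_db(adc(weak + blocker))  # blocker saturates the ADC
print(f"10 kHz tone without blocker: {clean:.1f} dB")
print(f"10 kHz tone with blocker:    {jammed:.1f} dB")
```

The exact numbers don’t matter; the point is that once the blocker drives the ADC past full scale, the weak tone’s measured power collapses even though nothing at 10 kHz changed. The only real cure is attenuating the blocker before it reaches the ADC, i.e. a physical filter.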
Well… I was trying to copy the SSTV signal.
I’m open to suggestions on that. My rationale is that, in my head, the difference between “failed” and “good” comes down to whether the equipment performed. In this case, I think the multi-sideband-looking signals that show up after the gain drop are the SSTV signal, but they’re pretty faint.
Incidentally, the LNA is specifically built and filtered for the 430 MHz band. You may be right about overload, but if so, I think this is the first time it’s happened. Hmm… worth keeping an eye on.
Well, if you follow the vetting guidelines posted here (https://wiki.satnogs.org/Operation#Rating_observations), then you should mark this observation “good”. Here is what the link specifies:
Categories of observations:
- You should mark observations as “Good” when it is clear from the waterfall and/or audio recording that a satellite is present. Keyboard Shortcut ‘g’
- You should mark observations as “Bad” when by examining the waterfall and/or audio it is obvious that there was no satellite detected in this observation. Keyboard Shortcut ‘b’
- You should mark observations as “Failed” when the station failed entirely: the waterfall and/or audio is empty or not present, or there’s too much noise. Keyboard Shortcut ‘f’
The only point on which we might disagree is this: are the good/bad/failed ratings meant to grade a specific signal the satellite emits (telemetry, beacon, etc.), or to grade whether the satellite is transmitting anything at all? The relevant section of the wiki states,
The main purpose of validating observations is to know if the satellite/transmitter is alive, if it transmits in the listed frequency/ies, and if the TLEs we have are accurate.
Now, my application of that quote to the Nexus SSTV link in question is that I don’t have enough data to determine whether the SSTV signal was present or not. I lack that data apparently due to a hardware, software, or QRM fault. If it’s a terrestrial problem, it shouldn’t count against the overall good/bad rate for that particular link.
Now, I laid all that out not to sound argumentative, but to explain how I arrived at my determination so you can scrutinize it. I’m probably coming off like a jerk here, and I don’t mean to at all. I’m grateful for your input, and I’m here to learn from people with more experience.
Well, let’s break it down by parts:
Is the transmitter alive?
The CW (or carrier) can be seen for the first portion of the observation, and it is perfectly straight. This indicates that what is being observed is moving at exactly the expected Doppler rate of change, which is very good evidence that the transmitter is indeed alive.
Is the transmitter transmitting in the listed frequency?
It’s right down the center of the waterfall (or very close to it), so it is indeed transmitting at the listed frequency.
Are the TLEs accurate?
Again, the observed CW and/or carrier is almost perfectly vertical. This indicates that the TLEs are correct.
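To make the “vertical trace means good TLEs” argument concrete, here is a hypothetical Python sketch (toy numbers, not real orbital mechanics or SatNOGS code) of what Doppler correction does to the received trace when the TLE prediction matches reality versus when the TLE is stale:

```python
# Toy model: a ground station tunes out the doppler predicted from the TLEs.
# If the TLEs are accurate, the corrected CW trace is vertical; if they are
# stale, the leftover doppler slants the trace across the waterfall.
import numpy as np

C = 299_792_458.0     # speed of light, m/s
F0 = 437_000_000.0    # hypothetical 70 cm downlink, Hz

t = np.linspace(-300, 300, 601)                      # seconds around closest approach
true_range_rate = 7000.0 * np.tanh(t / 60)           # toy range-rate profile, m/s
stale_range_rate = 7000.0 * np.tanh((t - 30) / 60)   # stale TLE: pass predicted ~30 s late

def received_freq(range_rate):
    """First-order doppler: downlink frequency seen on the ground."""
    return F0 * (1 - range_rate / C)

observed = received_freq(true_range_rate)           # what the antenna hears
corr_good = received_freq(true_range_rate) - F0     # correction from accurate TLEs
corr_stale = received_freq(stale_range_rate) - F0   # correction from the stale TLE

trace_good = observed - corr_good    # flat: a vertical line in the waterfall
trace_stale = observed - corr_stale  # drifts: a slanted/curved line

print("trace spread, accurate TLE: %.1f Hz" % np.ptp(trace_good - F0))
print("trace spread, stale TLE:    %.1f Hz" % np.ptp(trace_stale - F0))
```

A trace that is straight but offset from center would instead suggest a transmitter frequency error; a curved or slanted trace points at stale TLEs. That is why a near-vertical CW line centered in the waterfall is decent evidence on both counts.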
So we’ve met all three requirements. (And honestly, I don’t think that correct TLEs are even necessary; there are PLENTY of SatNOGS observations of newly launched or decaying satellites where the TLEs are quite obviously not correct, yet they are still vetted as “good” because it is clear that the satellite in question was observed.)
As for not seeing SSTV (or maybe seeing a little in the second half of the observation), I think it doesn’t make sense to have different “transmitters” for satellites that are on the same frequency. For example, the FOX FM sats have two “transmitters” listed, one for DUV telemetry and one for FM voice, but in actuality they are the exact same transmitter. Yes, DUV can be turned off (as we saw during the final days of Fox-1A), but the transmitter is still the same.
What I always do is treat “good” as “satellite is observed”, “bad” as “satellite is not observed”, and “failed” as “the waterfall is blank, missing, or absolute garbage.”
I’ve just changed the rating to good. @K3RLD, in the post above, describes very well how vetting should currently be done.
However, there is an ongoing discussion about changing the vetting process, as there are several issues with the current approach. Very recently the discussion was restarted among developers, as we have realized that it negatively affects other parts of the network and we need to rethink the whole idea of vetting.
Anyway, until we have something solid, we are going to follow what we have done until now, which as I said is described very well in @K3RLD’s post. In any case, if you have a doubt about how an observation should be vetted, feel free to open a thread here so the community can help.
Unfortunately, I don’t have anything to add to the main discussion of this thread about the gain drop.
@fredy, Roy, thank you for your input. We would have been in full agreement from the start if the database tracked the success rate strictly on a per-satellite basis, rather than breaking it down further into the distinct modes hosted on each bird. As it is, I think we’re highlighting a partial ambiguity in the guidelines. My take is that I want to provide useful feedback on all points: TLE accuracy, platform activity, and lastly whether the specified waveform shows up. I care about the last bit because it affects the sat’s good/bad rate for that particular mode, which people are certainly using as a guideline when scheduling observations. Roy, I take your input as ex cathedra and will grade my observations accordingly in the future.
You are right: vetting an observation that is scheduled for mode A as “good” because there is a signal of mode B is confusing. This is why we need to re-define what vetting means, and also to move forward with other changes that have been discussed in the past, like generating GNU Radio scripts dynamically based on (trying to demodulate) multiple modes.