Hi,
Regarding Observation 2459970
Does it look more like a bug, a USB connection problem with my RTL-SDR, or a coax problem?
-Yohan
There is ongoing discussion of this in another thread. Apparently it’s a bug that affects the tuner: basically, it switches back and forth between the current frequency and the frequency used in the last observation. I believe it’s being worked on (or at least researched).
Is this the same issue? Take Observation 2434434 mentioned in Observation 2412799: METEOR-M 2 (40069). There’s a distinct shift at about 700 seconds, where the central line shifts left substantially, and you can also see a fair bit of regular back-and-forth around the 200-second mark. These observations, by contrast, exhibit the zebra lines but the signal stays central. Are the ‘zebra lines’ part of the same issue, an artefact of it, or an independent problem?
I also have some observations that seem to have the frequency shift problem on my station.
https://network.satnogs.org/observations/2396158/
Not sure. I suggested that at first glance, but you may be correct.
In the interim, if this behavior is caused by the same issue as the others with this symptom, doing a full power-off shutdown and coming up fresh seems to clear things up.
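To be clear, by a full power-off shutdown I mean halting the Pi and cycling the power, not just restarting the client service. Roughly like this (assuming the stock satnogs-client unit name):
sudo systemctl stop satnogs-client
sudo poweroff
# then remove and restore power before letting it boot again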
Still not sure if it’s the same issue, but I think I can cause this ‘on demand’ by trying to be too clever.
In trying to run radiosonde_auto_rx when my station wasn’t doing anything else, I had a pre-observation script:
#!/bin/bash
# Stop the radiosonde decoder so it releases the RTL-SDR,
# then enable the bias tee to power the LNA.
sudo systemctl stop auto_rx.service
/home/pi/rtl_biast/build/src/rtl_biast -b 1
Sometimes I got a usb_claim_interface error -6 from rtl_biast when it hadn’t quite managed to shut down auto_rx in time. That generally killed the observation: it either didn’t work at all, or got no signal because the LNA wasn’t powered. Other times it appeared to work, but seemed to initiate the ‘zebra’ effect in the waterfall image (see observation 2501914).
It could be a fluke of timing, but my guess is it isn’t. I couldn’t prove it either way, as it needed both auto_rx stopped and satnogs-client restarted to clear latent issues.
Another process keeping the ‘handle’ is not the same as the same process doing so, but I thought it worth documenting as another data point that might be useful for digging deeper. I’ve also seen the same symptoms on this station when not running the above.
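If anyone wants to experiment, here is a minimal sketch of the same pre-observation script with an explicit wait for auto_rx to exit before touching the dongle; the process-name match and the roughly 10-second timeout are my guesses, not tested values:
#!/bin/bash
# Stop the decoder, then poll until its process is actually gone,
# so rtl_biast doesn't hit usb_claim_interface error -6.
sudo systemctl stop auto_rx.service
for i in $(seq 1 20); do
    pgrep -f auto_rx >/dev/null || break
    sleep 0.5
done
/home/pi/rtl_biast/build/src/rtl_biast -b 1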
My station has started doing this again - this and this observation being particularly obvious. I’m going to restart the service again, but how do we get to the bottom of this?
Do we know any more about where the problem is occurring? Top thread output from various processes:
top - 16:06:29 up 2:03, 1 user, load average: 0.55, 0.31, 0.22
Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.2 us, 1.4 sy, 0.0 ni, 90.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 1939.4 total, 1436.8 free, 208.3 used, 294.4 buff/cache
MiB Swap: 1939.4 total, 1939.4 free, 0.0 used. 1587.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4412 satnogs 20 0 328436 104768 52088 S 42.1 5.3 1:34.31 satnogs_fm.py
351 satnogs 20 0 312964 99180 18316 S 1.0 5.0 0:27.60 satnogs-client
345 hamlib-+ 20 0 25244 4236 2744 S 0.3 0.2 0:04.72 rigctld
4470 pi 20 0 10300 3072 2560 R 0.3 0.2 0:00.05 top
1 root 20 0 33752 7996 6404 S 0.0 0.4 0:03.69 systemd
top - 16:06:17 up 2:03, 1 user, load average: 0.71, 0.33, 0.22
Threads: 24 total, 0 running, 24 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.2 us, 3.1 sy, 0.0 ni, 87.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 1939.4 total, 1440.2 free, 207.9 used, 291.4 buff/cache
MiB Swap: 1939.4 total, 1939.4 free, 0.0 used. 1590.7 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
351 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:02.02 satnogs-client
576 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.18 satnogs-client
648 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.72 satnogs-client
652 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.55 satnogs-client
653 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.51 satnogs-client
681 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.32 satnogs-client
683 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.27 satnogs-client
684 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.42 satnogs-client
685 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:07.35 satnogs-client
744 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.36 satnogs-client
745 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.39 satnogs-client
798 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.52 satnogs-client
890 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.37 satnogs-client
891 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.41 satnogs-client
951 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.54 satnogs-client
1069 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.13 satnogs-client
1070 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.41 satnogs-client
1117 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.46 satnogs-client
1255 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:01.22 satnogs-client
1256 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.35 satnogs-client
1438 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.27 satnogs-client
1581 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.40 satnogs-client
4409 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:00.04 satnogs-client
4410 satnogs 20 0 312964 99180 18316 S 0.0 5.0 0:01.79 satnogs-client
top - 16:06:07 up 2:02, 1 user, load average: 0.65, 0.31, 0.22
Threads: 16 total, 0 running, 16 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.4 us, 1.9 sy, 0.0 ni, 89.5 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 1939.4 total, 1442.6 free, 208.1 used, 288.7 buff/cache
MiB Swap: 1939.4 total, 1939.4 free, 0.0 used. 1593.1 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4425 satnogs 20 0 328436 104768 52088 S 13.0 5.3 0:25.02 pfb_arb_resamp1
4423 satnogs 20 0 328436 104768 52088 S 8.3 5.3 0:15.21 fir_filter_blk<
4429 satnogs 20 0 328436 104768 52088 S 6.0 5.3 0:11.45 ogg_encoder4
4426 satnogs 20 0 328436 104768 52088 S 4.0 5.3 0:07.72 fir_filter_blk1
4422 satnogs 20 0 328436 104768 52088 S 3.0 5.3 0:05.57 soapy::source1
4424 satnogs 20 0 328436 104768 52088 S 2.3 5.3 0:04.82 coarse_doppler_
4428 satnogs 20 0 328436 104768 52088 S 1.7 5.3 0:03.10 dc_blocker_ff12
4430 satnogs 20 0 328436 104768 52088 S 1.7 5.3 0:02.94 iq_sink5
4431 satnogs 20 0 328436 104768 52088 S 1.7 5.3 0:02.61 waterfall_sink2
4427 satnogs 20 0 328436 104768 52088 S 1.3 5.3 0:02.87 quadrature_dem1
4419 satnogs 20 0 328436 104768 52088 S 0.3 5.3 0:00.71 satnogs_fm.py
4421 satnogs 20 0 328436 104768 52088 S 0.3 5.3 0:00.03 tcp_rigctl_msg_
4432 satnogs 20 0 328436 104768 52088 S 0.3 5.3 0:00.90 satnogs_fm.py
4412 satnogs 20 0 328436 104768 52088 S 0.0 5.3 0:01.53 satnogs_fm.py
4418 satnogs 20 0 328436 104768 52088 S 0.0 5.3 0:00.00 satnogs_fm.py
4433 satnogs 20 0 328436 104768 52088 S 0.0 5.3 0:00.00 satnogs_fm.py
top - 16:06:47 up 2:03, 1 user, load average: 0.63, 0.35, 0.23
Threads: 3 total, 0 running, 3 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.0 us, 0.0 sy, 0.0 ni, 90.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 1939.4 total, 1432.5 free, 207.9 used, 299.1 buff/cache
MiB Swap: 1939.4 total, 1939.4 free, 0.0 used. 1583.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
345 hamlib-+ 20 0 25244 4236 2744 S 6.7 0.2 0:00.10 rigctld
4411 hamlib-+ 20 0 25244 4236 2744 S 0.0 0.2 0:00.50 rigctld
4420 hamlib-+ 20 0 25244 4236 2744 S 0.0 0.2 0:00.50 rigctld
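For reference, per-thread listings like the ones above can be reproduced with top’s thread mode against a specific PID; roughly as follows, with the PIDs taken from this particular run:
top -b -n 1 | head -n 20        # overall per-process view
top -H -b -n 1 -p 351           # threads of satnogs-client
top -H -b -n 1 -p 4412          # threads of the satnogs_fm.py flowgraph
top -H -b -n 1 -p 345           # threads of rigctld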
The problem has started to reappear on my station. It seems like it may be combined with the RTL-SDR disconnecting/reconnecting; I had two observations with data but no waterfall before this observation.
http://network.satnogs.org/observations/2930952/
There is a leftover observation running in parallel that causes this. Update to the latest version and restart your station.
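On the stock Raspberry Pi image that usually means something along these lines (menu entry names may differ between releases):
sudo satnogs-setup              # choose Update, then Apply
sudo systemctl restart satnogs-client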