I’ve noticed that not all files are being uploaded to SatNOGS. The /tmp/.satnogs/data/ directory gets rather full, then the client is no longer seen at dev-network, although it still shows as online.
For my station, the last observation was around 1500z on 22/09/2017.
/tmp/.satnogs/data/ contains 12 files (280 MB)
/tmp/.satnogs/data/complete contains 13 files (131 MB)
Maybe the tmpfs sizes need to be adjusted, giving more space to /tmp/.satnogs/ and less to the others?
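Something along these lines in /etc/fstab would do it, I think (the 512M size is just my guess, not a recommended value):

# /etc/fstab (illustrative): give the SatNOGS working directory a larger tmpfs
tmpfs   /tmp/.satnogs   tmpfs   defaults,noatime,size=512M   0   0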
A few questions:
Why do the files remain instead of being deleted after upload? (audio & waterfall)
Why are they not all uploaded, instead remaining in /tmp/.satnogs/data/? (data & waterfall)
Is there a scheduled cleanup?
Interestingly, the system I built on Stretch following the Fedora instructions does not use tmpfs for /tmp/.satnogs/; however, it still has many files sitting in /tmp/.satnogs/data/complete/.
The SatNOGS client itself doesn’t delete completed observations, as some users may need them locally, so you need to create a cron job for that yourself.
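For example, a daily cron script along these lines would do it (just a sketch, assuming the default /tmp/.satnogs/data/complete path; adjust the age to taste):

#!/bin/sh
# Example /etc/cron.daily/satnogs-cleanup (illustrative): remove completed
# observation files older than two days so the tmpfs doesn't fill up
find /tmp/.satnogs/data/complete -type f -mtime +2 -delete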
I’m not sure why some uploads fail. In the past this has happened due to a bad internet connection; you need to check your logs for it. Do you run the SatNOGS client manually, or do you use one of the automatic solutions (supervisord or systemd)?
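If you are on systemd, something like the following should pull up recent client output to look through (assuming the service is named satnogs-client; adjust if yours differs):

journalctl -u satnogs-client --since "1 hour ago"   # recent client log
journalctl -u satnogs-client | grep -i error        # search for upload errors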
EDIT: I just realized that we are discussing the Raspbian image… probably the step above is not included. cc @Acinonyx, who has worked on the image generation.
Yes, it was the Raspbian image. That step is not included.
I believe the image uses systemd.
Some cleanup would be handy, and /tmp/.satnogs should be given more space; I did find it can fill rather quickly, and some waterfall files have been >50 MB.
Interesting how the .ogg files virtually always get uploaded, but not the waterfalls. The latter help to adjust gain.
As a trial, I’ve gone back to a manual build, but upgraded to the latest gr-satnogs and satnogs-client.
Thanks for your feedback! There is already a pending merge request to clean up after completed jobs.
There can also be leftover data from incomplete jobs. We could clean this up in the client, via a cron job, or leave it as a manual maintenance task.
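Until that lands, a manual sweep along these lines would work (a sketch; the one-day threshold is arbitrary):

# Illustrative only: delete files in the data directory untouched for over a
# day, which are almost certainly leftovers from aborted jobs
find /tmp/.satnogs/data -maxdepth 1 -type f -mtime +1 -delete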
It is possibly the same issue as the missing cleanup of completed jobs in the client. It could also be that the system ran out of memory and started killing processes at random; in the first case it may be rotctld, and in the second GNU Radio.
This issue will be fixed with the release of SatNOGS client version 0.5 hopefully in a few days.
More information regarding the above logs: I have also tried building the satnogs-client 0 branch locally, with similar results. I tried Fedora and Raspbian on a Raspberry Pi 3. Regarding cleanup, the logs were from a fresh system. I have tried this with multiple observations. I do not have a rotator, so I did not set any related settings on the BASIC or ADVANCED settings pages.
When I looked at the config page on port 5000, I found SATNOGS_COMPLETE_OUTPUT_PATH was blank and no directory was created for /tmp/.satnogs/data/complete, so I edited the .env to set the parameter.
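In other words, something like this in the .env (the path being the one the client had been using):

SATNOGS_COMPLETE_OUTPUT_PATH=/tmp/.satnogs/data/complete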
I’m not getting any files created in /tmp/.satnogs/data/ or its sub-directories. Of course, nothing is being uploaded to dev-network either.
What about the upgrade from 0.4 to 0.5 would have caused this? Do I need to upgrade gr-satnogs as well?
I’ve gone from a working system to nothing, except that jobs are still being scheduled and, I think, still attempting to collect data.
Of course I upgraded the UHF system as well, but from 0.3 to 0.5. It broke too.
*Mental note to self: only do one at a time!!
About SATNOGS_COMPLETE_OUTPUT_PATH: it was changed in v0.5 to be blank, as the new default behaviour is to delete the files (ogg, waterfall and data) after a successful upload rather than move them into the complete directory.
Edit: The rest looks like compatibility issues between the client and gr-satnogs.
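So with the v0.5 default, the entry is simply left empty:

# v0.5 default: an empty value means files are deleted after a successful upload
SATNOGS_COMPLETE_OUTPUT_PATH=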
I’ve changed the SATNOGS_COMPLETE_OUTPUT_PATH to blank.
I never knew about the Compatibility page. OK, so I need the latest gr-satnogs.
Now, not being a Linux guru, how do I make sure I download and install the correct version? How do I check what version I currently have installed?
I tried git clone https://github.com/satnogs/gr-satnogs.git and recompiling, but I’m still not getting any data. Where is the data stored during and after processing? I couldn’t find any ogg or waterfall files anywhere.
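For reference, the rebuild followed roughly the usual GNU Radio out-of-tree module steps (a sketch; the repository may document its own procedure):

git clone https://github.com/satnogs/gr-satnogs.git
cd gr-satnogs
mkdir build && cd build
cmake ..             # configure the out-of-tree module
make                 # compile
sudo make install    # install the blocks and scripts
sudo ldconfig        # refresh the shared-library cache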
Sample of the log just after a METEOR-M 2 pass:
Oct 04 10:15:20 satnogs-vhf bash[700]: 2017-10-04 10:15:20,846 - satnogsclient - DEBUG - Sending message: F 137897423
Oct 04 10:15:20 satnogs-vhf bash[700]: 2017-10-04 10:15:20,849 - satnogsclient - DEBUG - Received message: RPRT 0
Oct 04 10:15:20 satnogs-vhf bash[700]: 2017-10-04 10:15:20,973 - satnogsclient - DEBUG - Sending message: F 137897422
Oct 04 10:15:20 satnogs-vhf bash[700]: 2017-10-04 10:15:20,978 - satnogsclient - DEBUG - Received message: RPRT 0
Oct 04 10:15:21 satnogs-vhf bash[700]: Exception in thread Thread-101:
Oct 04 10:15:21 satnogs-vhf bash[700]: Traceback (most recent call last):
Oct 04 10:15:21 satnogs-vhf bash[700]:   File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
Oct 04 10:15:21 satnogs-vhf bash[700]:     self.run()
Oct 04 10:15:21 satnogs-vhf bash[700]:   File "/usr/lib/python2.7/threading.py", line 754, in run
Oct 04 10:15:21 satnogs-vhf bash[700]:     self.__target(*self.__args, **self.__kwargs)
Oct 04 10:15:21 satnogs-vhf bash[700]:   File "/usr/local/lib/python2.7/dist-packages/satnogsclient/observer/worker.py", line 127, in _communicate_tracking_info
Oct 04 10:15:21 satnogs-vhf bash[700]:     self.check_observation_end_reached()
Oct 04 10:15:21 satnogs-vhf bash[700]:   File "/usr/local/lib/python2.7/dist-packages/satnogsclient/observer/worker.py", line 175, in check_observation_end_reached
Oct 04 10:15:21 satnogs-vhf bash[700]:     self.trackstop()
Oct 04 10:15:21 satnogs-vhf bash[700]:   File "/usr/local/lib/python2.7/dist-packages/satnogsclient/observer/worker.py", line 168, in trackstop
Oct 04 10:15:21 satnogs-vhf bash[700]:     os.killpg(os.getpgid(self._gnu_proc.pid), signal.SIGKILL)
Oct 04 10:15:21 satnogs-vhf bash[700]: OSError: [Errno 3] No such process
Oct 04 10:15:33 satnogs-vhf bash[700]: 2017-10-04 10:15:33,989 - apscheduler.executors.default - INFO - Running job "get_jobs (trigger: interval[0:01:00], next run at: 2017-10-04 02:16:33 UTC)" (scheduled at 2017-10-04 02:15:33.964267+00:00)
Oct 04 10:15:36 satnogs-vhf bash[700]: 2017-10-04 10:15:36,019 - satnogsclient - DEBUG - Opening TCP socket: 127.0.0.1:5011
Hi @Zathras – I’m new to SatNOGS, but since you mentioned having the Raspbian image up above:
You can check which version of gr-satnogs is present by running dpkg -l gr-satnogs:
pi@raspberrypi:~/.ssh $ dpkg -l gr-satnogs
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=====================================-=======================-=======================-===============================================================================
ii gr-satnogs 20171001.ae1e331-1 armhf SatNOGS GNU Radio Out-Of-Tree Module
Here, you can see that my version is “20171001.ae1e331-1”, meaning it was built on October 1, 2017, and is about a week old (searching for “ae1e331” on that page shows I’m a couple of updates behind).
I’d assume that re-running the sudo satnogs-setup -n script is intended to take care of updating components.
@Acinonyx, have I got those last two points right? Is there an intended way to fetch/apply the latest versions of packages (perhaps just sudo satnogs-setup without the -n argument)?
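In the meantime, since gr-satnogs arrives as a Debian package on the image, I’d guess a plain apt upgrade would also pull in a newer build (an assumption on my part):

sudo apt-get update
sudo apt-get install --only-upgrade gr-satnogs   # assumes the image's repo carries newer builds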
Nope, that did not work for me. I’m not using the SatNOGS Raspbian image, just one I created myself. The instructions no longer exist, though, which is a pity because I referred to them all the time.
I had several issues with the SatNOGS Raspbian image, which is why I went back to my version.