Rico
December 17, 2019, 12:29pm
I figured out what the issue was (see below) and fixed my post observation script.
Did you configure any long-running post observation scripts? That can trigger the issue as follows (this was the case on my station):
1. Observation 1 ends, long-running post observation script starts
2. Observation 2 starts while the post-ob from 1 is still running
3. Obs 2 is written to tmpfs, but Obs 1 is not cleaned up yet
4. tmpfs fills up
5. The jobs database on tmpfs cannot be written to any more
6. Obs 2 is not stopped because the jobs database cannot be accessed, and will run indefinitely
7. Obs 3 is started, b…
Bottom line: observations are only cleaned up from tmpfs after the post observation script finishes, so it is wise to keep post-ob's short and fork/daemonize if you want to do any longer-running processing. My post observation script now forks if it has to do meteor processing.
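For illustration, a minimal sketch of the fork/daemonize idea in Python (this is not my actual script, just the pattern): the post-ob hook double-forks so the long-running work continues in a detached grandchild, while the process satnogs-client invoked exits immediately and tmpfs cleanup can proceed.

```python
#!/usr/bin/env python3
# Sketch: return quickly from a post observation script by double-forking
# the long-running work into a detached grandchild process.
import os
import time


def long_running_processing():
    # Placeholder for e.g. meteor demodulation/decoding work
    time.sleep(1)


def fork_and_detach(work):
    """Double-fork so the grandchild is reparented to init and the
    original process (the post-ob hook) can exit immediately."""
    if os.fork() > 0:
        return  # original process: return to satnogs-client right away
    os.setsid()  # child: start a new session, detach from the caller
    if os.fork() > 0:
        os._exit(0)  # first child exits; grandchild is orphaned to init
    try:
        work()  # grandchild: do the slow work at its leisure
    finally:
        os._exit(0)


if __name__ == "__main__":
    fork_and_detach(long_running_processing)
    # execution reaches here almost immediately in the original process,
    # so satnogs-client can clean up the observation from tmpfs
```

The double fork plus `os.setsid()` is the classic Unix daemonization recipe; a single fork would also unblock the client, but the second fork ensures the worker can never reacquire a controlling terminal.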
#!/usr/bin/env python3
#
# Meteor Decoder Processor
# Initial version for Meteor M2:
# Mark Jessop <vk5qi@rfhead.net> 2017-09-01
# This version:
# Rico van Genugten @PA3RVG 2019-11-19
#
# This script picks up IQ recordings from wherever a satnogs flowgraph
# puts them, processes them and places output images in the satnogs recorded
# data directory, where satnogs-client will pick them up and upload them to the
# corresponding observation. The script uses meteor_demod, medet and the
# convert tool supplied by ImageMagick, which should be installed on your
# system.
# You can find them here:
#
# meteor_demod: https://github.com/dbdexter-dev/meteor_demod
# medet: https://github.com/artlav/meteor_decoder
# convert: apt install imagemagick
#
# You can use any satnogs flowgraph that produces IQ data with a wide enough