My station recently locked up and failed all observations.
After some investigation, I found that it was due to Observation 7439976 generating an IQ file of more than 1GB. This filled the entire tmpfs.
Normally, IQ files aren’t much bigger than 100MB. Any idea why this observation generated such a big IQ file?
The second largest observation is Observation 7439018, which generated a 429MB IQ file. After that, the largest file during the last 3 years has been Observation 5754434 at 369MB.
Now I need to find a way to disable IQ file generation (or redirect it from tmpfs to local storage) when the baudrate (and the observation length) would make the file too large.
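If I remember the satnogs-client settings correctly, the dump location itself is configurable, so redirecting it away from tmpfs might just be a config change. A minimal sketch; the variable names below are from my memory of satnogs-setup, so treat them as assumptions and double-check on your own station:

```bash
# Sketch: point the IQ dump at local storage instead of tmpfs.
# SATNOGS_ENABLE_IQ_DUMP / SATNOGS_IQ_DUMP_FILENAME are the setting names
# as I recall them from satnogs-setup -- verify before relying on them.
SATNOGS_ENABLE_IQ_DUMP="True"
SATNOGS_IQ_DUMP_FILENAME="/var/iq/iq_dump"   # e.g. SD card or SSD, not /tmp
```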
Or maybe I can just increase the size of my tmpfs and let swap take over when the files are too big to fit in RAM.
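Something like this, untested and with example sizes (tmpfs pages can be swapped out, so a large enough swap file would back the overflow):

```bash
# Grow the tmpfs on the fly (size is an example, pick what the RAM allows):
sudo mount -o remount,size=2G /tmp

# To make it permanent, the matching /etc/fstab line would be:
#   tmpfs /tmp tmpfs defaults,size=2G 0 0
```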
My fault: I suggested the 250 ksps transmitter to look for the LoRa signals. It runs fine without the IQ dump, but this is probably not acceptable. Did you schedule it yourself or via the auto-scheduler?
Perhaps it is possible to lose only that one observation: if the pre-observation script is allowed to remove the iq_dump file when it is still there as an observation starts, that could keep further observations from being lost as well.
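A rough sketch of such a pre-script, assuming it is hooked in via SATNOGS_PRE_OBSERVATION_SCRIPT and that the path matches the station's actual dump location (the path here is an example):

```bash
#!/bin/bash
# Pre-observation cleanup sketch: if a stale IQ dump from a failed or
# oversized observation is still sitting in tmpfs, delete it before the
# next observation starts.
IQ_DUMP="/tmp/iq_dump"   # example; must match the station's dump path

if [ -f "$IQ_DUMP" ]; then
    echo "Removing stale IQ dump ($(du -h "$IQ_DUMP" | cut -f1))" >&2
    rm -f "$IQ_DUMP"
fi
```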
Yes, I will make my upload script more robust so it removes the file even if the upload fails. At the moment I have `&& rm`, but I should probably do `; rm` instead. And a belt-and-suspenders cleanup in a pre-script is also a great idea.
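For the record, the difference as a sketch (`upload_iq` and the path are placeholders standing in for my real script):

```bash
#!/bin/bash
# Fragile version: if the upload fails, `rm` never runs and the file
# stays in tmpfs until it fills up:
#   upload_iq "$IQ_FILE" && rm -f "$IQ_FILE"

# More robust: remove the file no matter how the upload exits.
IQ_FILE="/tmp/iq_dump"        # placeholder path
trap 'rm -f "$IQ_FILE"' EXIT  # cleanup also runs on errors and interrupts

upload_iq "$IQ_FILE" || echo "upload failed, removing file anyway" >&2
```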