My two currently active stations have been failing observations at a much higher rate than the historical trend of ~never (apart from my own mistakes).
#3150 is the SatNOGS Kit demo hardware from Hamvention. We re-imaged the uSD card, mostly as a student training opportunity, but the hardware was otherwise reassembled to as-received condition.
I’m leaving troubleshooting notes here on things I’ve collected and seen, partly because I don’t have much dedicated time to focus on root-cause discovery, and partly to give search hits for others seeing similar behavior.
Info
Both stations run the satnogs-auto-scheduler with mostly the same priorities file. This is intentional: scheduling observations of the same transmitters on the co-located systems makes for easy comparisons (a sketch of the priorities-file format is below).
#834 is a Pi4 driving a G-5500 rotator and a set of VHF+UHF Yagis. Yes, there is indeed no preamp between the antenna and the 60m of LMR-600 run to the diplexer. Some day…
#3150 has its Pi4 on the roof with PoE for network and power.
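For search hits on what I mean by a priorities file: each line is a NORAD ID, a float priority, and a SatNOGS DB transmitter UUID, roughly like the placeholder snippet below. The UUIDs here are fake and the authoritative column rules are in the satnogs-auto-scheduler README, so treat this as a sketch rather than a spec.

```
25544 1.00 PlaceholderTransmitterUuidA
40069 0.80 PlaceholderTransmitterUuidB
```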
Obs 8247156 failure
```
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: satnogsclient.scheduler.tasks - INFO - Spawning observer worker.
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: satnogsclient.observer.observer - INFO - Start rotctrl thread.
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: apscheduler.executors.default - ERROR - Job "spawn_observer (trigger: date[2023-09-29 04:17:13 UTC], next run at: 2023-09-29 04:17:13 UTC)" raised an exception
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: Traceback (most recent call last):
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/apscheduler/executors/base.py", line 125, in run_job
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: retval = job.func(*job.args, **job.kwargs)
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/satnogsclient/scheduler/tasks.py", line 64, in spawn_observer
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: observer.observe()
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/satnogsclient/observer/observer.py", line 243, in observe
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: self.run_rot()
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/satnogsclient/observer/observer.py", line 334, in run_rot
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: self.tracker_rot.trackobject(self.location, self.tle)
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/satnogsclient/observer/worker.py", line 184, in trackobject
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: self._midpoint = WorkerTrack.find_midpoint(self.observer_dict, self.satellite_dict,
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/satnogsclient/observer/worker.py", line 150, in find_midpoint
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: timestamp_max = pytz.utc.localize(ephem.Date(observer.next_pass(satellite)[2]).datetime())
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: File "/var/lib/satnogs/lib/python3.9/site-packages/ephem/__init__.py", line 534, in next_pass
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: result = _libastro._next_pass(self, body)
Sep 28 23:17:13 satnogs-1 satnogs-client[432]: ValueError: that satellite appears to be circumpolar and so will never cross the horizon
```
.... later ....
```
Sep 28 23:56:06 satnogs-1 satnogs-client[432]: satnogsclient.scheduler.tasks - ERROR - Observer job lock acquiring timed out.
```
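For context on that ValueError: GOES-17 is geostationary, so from a station that can see it at all it never rises or sets, and pyephem's next_pass() (which find_midpoint() calls, per the traceback) refuses to compute a pass for it. Below is a minimal sketch that reproduces the error; the CelesTrak URL, the "GOES 17" name match, and the station coordinates are stand-ins rather than my actual config, and the exact behavior can vary with the pyephem version.

```python
import urllib.request

import ephem

# Fetch current GOES TLEs from CelesTrak. The URL/group name are assumptions;
# any geostationary TLE that sits above your horizon should behave the same.
URL = "https://celestrak.org/NORAD/elements/gp.php?GROUP=goes&FORMAT=tle"
lines = urllib.request.urlopen(URL, timeout=30).read().decode().splitlines()

# Find the "GOES 17" name line; the two lines after it are the TLE proper.
idx = next((i for i, line in enumerate(lines) if "GOES 17" in line), None)
if idx is None:
    raise SystemExit("GOES 17 not found in the fetched group")
sat = ephem.readtle(lines[idx], lines[idx + 1], lines[idx + 2])

# Hypothetical CONUS station location (not the real coordinates of #834).
obs = ephem.Observer()
obs.lat, obs.lon, obs.elevation = "40.0", "-83.0", 300
obs.date = ephem.now()

try:
    # Same call find_midpoint() makes in worker.py. For a bird that never
    # crosses the local horizon, pyephem refuses to compute a pass.
    print(obs.next_pass(sat))
except ValueError as exc:
    # Expect: "that satellite appears to be circumpolar and so will never
    # cross the horizon" (message may vary with the pyephem version).
    print("next_pass:", exc)
```

If that holds up, it lines up with the GOES-17 pattern noted in the next section.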
Correlation?
Looking back at the obs history, maybe there is a pattern on #834 of a GOES-17 observation being the first failure. After that, the station fails everything until I reboot the Pi.
maybe?
… still looking for other correlations for the first failure of a series.