Satnogsclient.scheduler.tasks - ERROR - An error occurred trying to GET observation jobs from network

I am wondering why I sometimes get the errors shown below for an indefinite period; they then repeat every minute.
Is something configured incorrectly? I am running 1.8.1 and observations are successful. I am just curious about these errors; could someone give me more information about them?

```
Mar 03 12:52:59 raspberrypi satnogs-client[427]: satnogsclient.scheduler.tasks - ERROR - An error occurred trying to GET observation jobs from network
Mar 03 12:52:59 raspberrypi satnogs-client[427]: Traceback (most recent call last):
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/connectionpool.py", line 449, in _make_request
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     six.raise_from(e, None)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "<string>", line 3, in raise_from
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/connectionpool.py", line 444, in _make_request
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     httplib_response = conn.getresponse()
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/usr/lib/python3.9/http/client.py", line 1347, in getresponse
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     response.begin()
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/usr/lib/python3.9/http/client.py", line 307, in begin
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     version, status, reason = self._read_status()
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/usr/lib/python3.9/http/client.py", line 268, in _read_status
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/usr/lib/python3.9/socket.py", line 704, in readinto
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     return self._sock.recv_into(b)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/usr/lib/python3.9/ssl.py", line 1241, in recv_into
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     return self.read(nbytes, buffer)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/usr/lib/python3.9/ssl.py", line 1099, in read
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     return self._sslobj.read(len, buffer)
Mar 03 12:52:59 raspberrypi satnogs-client[427]: socket.timeout: The read operation timed out
Mar 03 12:52:59 raspberrypi satnogs-client[427]: During handling of the above exception, another exception occurred:
Mar 03 12:52:59 raspberrypi satnogs-client[427]: Traceback (most recent call last):
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/requests/adapters.py", line 489, in send
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     resp = conn.urlopen(
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/connectionpool.py", line 787, in urlopen
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     retries = retries.increment(
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/util/retry.py", line 550, in increment
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     raise six.reraise(type(error), error, _stacktrace)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/packages/six.py", line 770, in reraise
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     raise value
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/connectionpool.py", line 703, in urlopen
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     httplib_response = self._make_request(
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/connectionpool.py", line 451, in _make_request
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     raise ReadTimeoutError(
Mar 03 12:52:59 raspberrypi satnogs-client[427]: urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='network.satnogs.org', port=443): Read timed out. (read>
Mar 03 12:52:59 raspberrypi satnogs-client[427]: During handling of the above exception, another exception occurred:
Mar 03 12:52:59 raspberrypi satnogs-client[427]: Traceback (most recent call last):
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/satnogsclient/scheduler/tasks.py", line 191, in get_jobs
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     response = requests.get(url,
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/requests/api.py", line 73, in get
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     return request("get", url, params=params, **kwargs)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/requests/api.py", line 59, in request
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     return session.request(method=method, url=url, **kwargs)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/requests/sessions.py", line 587, in request
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     resp = self.send(prep, **send_kwargs)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     r = adapter.send(request, **kwargs)
Mar 03 12:52:59 raspberrypi satnogs-client[427]:   File "/var/lib/satnogs/lib/python3.9/site-packages/requests/adapters.py", line 578, in send
Mar 03 12:52:59 raspberrypi satnogs-client[427]:     raise ReadTimeout(e, request=request)
Mar 03 12:52:59 raspberrypi satnogs-client[427]: requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='network.satnogs.org', port=443): Read timed out. (read tim>
```

It usually coincides with congestion on the Network side; see https://status.libre.space/
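
For context, the failing call is the periodic get_jobs task simply doing an HTTP GET against the Network API; when the server is slow to answer, the read times out, the exception is logged, and the client just tries again on the next one-minute run. That is why the message repeats every minute while observations still succeed. A rough sketch of that pattern (illustrative only; the parameter values, the 45-second timeout and the bare function shown here are assumptions, not the actual client code):

```python
import logging
import requests

log = logging.getLogger("satnogsclient.scheduler.tasks")

# Illustrative values only -- the real client builds the URL and
# query parameters from its own configuration.
JOBS_URL = "https://network.satnogs.org/api/jobs/"
PARAMS = {"ground_station": 2659, "lat": 48.4661, "lon": 1.448, "alt": 180}


def get_jobs():
    """Fetch upcoming observation jobs from the Network.

    A read timeout here is transient: log it, return nothing, and let
    the scheduler retry on the next one-minute tick.
    """
    try:
        response = requests.get(JOBS_URL, params=PARAMS, timeout=45)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException:
        log.exception("An error occurred trying to GET observation jobs from network")
        return []
```

As long as one of the next fetches succeeds, nothing is lost.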

Hi Daniel, thanks for sharing this status page.
Good to know that everything is OK on my side.

The F6FKQ-UHF station is newly installed from the Docker packages. Everything looks good until the first observation starts.

Below is an extract of the log, starting at Pi 3 boot, including the scheduling of one observation and the failure right at the start of it.

quote

INFO satnogsclient Starting status listener thread…
INFO satnogsclient.scheduler.tasks Registering get_jobs periodic task (60 sec. interval)
INFO satnogsclient.scheduler.tasks Registering post_data periodic task (180 sec. interval)
INFO satnogsclient Configuration valid, satnogs-client started successfully.
INFO satnogsclient.scheduler.tasks Received job for observation 12729895, starting at 2025-11-12T21:35:42+00:00
INFO satnogsclient.scheduler.tasks Drop planned observation 12729895 (reason: deleted in network).
ERROR satnogsclient.scheduler.tasks HTTPSConnectionPool(host='network.satnogs.org', port=443): Max retries exceeded with url: /api/jobs/?ground_station=2659&lat=48.4661&lon=1.448&alt=180 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fa0732640>: Failed to establish a new connection: [Errno -2] Name or service not known'))
ERROR satnogsclient.scheduler.tasks Fetching jobs from network failed.

unquote

Note that the VHF station, based on the same setup, gives identically bad results.

I have spent a considerable amount of time reading the docs and running tests, without any success so far.

Looking at the router's connection page, I can see that the connection between the Pi 3 and network.satnogs.org, opened at the beginning of the observation, always stays in "time wait" status, is shut down after a short while, and in the end an error is reported.

DNS works well and does not seem to be an issue.

Any clue where I should dig in my setup?

73, F6FKQ

Try changing the log configuration from INFO to DEBUG mode:

#SATNOGS_LOG_LEVEL=INFO
SATNOGS_LOG_LEVEL=DEBUG

and then restart the Docker container.

That will give more detail.
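
One note here (an assumption about the usual dockerized setup, adjust to yours): if the log level is set through the station's .env/compose file, a plain restart of the running container may not pick it up, since the variable is passed in when the container is created. In that case recreate the client container, for example with sudo docker compose up -d run from the directory that holds your station's compose file, or with whatever command you normally use to bring the stack up.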

Hello Bali, thank you for your support.

Here is the setup report:

------------[ copy here ]------------
{
  "versions": {
    "satnogs-client": "unknown",
    "satnogs-ansible": "unknown",
    "satnogs-flowgraphs": "unknown",
    "gr-satnogs": "unknown",
    "gr-soapy": "unknown",
    "gnuradio": "unknown",
    "satnogs-config": "1.0"
  },
  "state": {
    "is-applied": false,
    "pending-tags": null
  },
  "system": {
    "date": "2025-11-12T14:03:00.453660+00:00",
    "platform": {
      "system": "Linux",
      "node": "satnogs-config",
      "release": "6.12.47+rpt-rpi-v8",
      "version": "#1 SMP PREEMPT Debian 1:6.12.47-1+rpt1~bookworm (2025-09-16)",
      "machine": "aarch64",
      "processor": ""
    },
    "memory": {
      "total": 950181888,
      "available": 621543424,
      "percent": 34.6,
      "used": 257683456,
      "free": 283582464,
      "active": 478187520,
      "inactive": 70729728,
      "buffers": 30416896,
      "cached": 378499072,
      "shared": 4694016,
      "slab": 50376704
    },
    "disk": {
      "total": 30837207040,
      "used": 7287562240,
      "free": 21963780096,
      "percent": 24.9
    }
  },
  "configuration": {
    "satnogs_antenna": "RX",
    "satnogs_api_token": "[redacted]",
    "satnogs_log_level": "DEBUG",
    "satnogs_rf_gain": "29.7",
    "satnogs_rx_samp_rate": "2.048e6",
    "satnogs_scheduler_log_level": "DEBUG",
    "satnogs_soapy_rx_device": "driver=rtlsdr",
    "satnogs_station_elev": "180",
    "satnogs_station_id": "2659",
    "satnogs_station_lat": "48.4661",
    "satnogs_station_lon": "1.4480"
  }
}
------------[ copy end ]-------------

Log level is set to DEBUG. Curiously, I do not get more information than with INFO.

INFO satnogsclient Starting status listener thread…
INFO satnogsclient.scheduler.tasks Registering get_jobs periodic task (60 sec. interval)
INFO satnogsclient.scheduler.tasks Registering post_data periodic task (180 sec. interval)
INFO satnogsclient Configuration valid, satnogs-client started successfully.
INFO satnogsclient.scheduler.tasks Received job for observation 12734893, starting at 2025-11-13T21:51:42+00:00
INFO satnogsclient.scheduler.tasks Drop planned observation 12734893 (reason: deleted in network).

Did I miss something?

73, F6FKQ

Try running these commands:

sudo docker ps

and

sudo docker images

and check the error log live with:

sudo docker logs -f satnogs_satnogs-client

and check the connection to the SatNOGS server:
sudo docker exec -ti satnogs_satnogs-client curl -v 'https://network.satnogs.org/api/jobs/?ground_station=2659&lat=48.4661&lon=1.448&alt=180'
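
If no jobs are currently scheduled for the station, the expected result of that curl should simply be an empty JSON list ([]) with an HTTP 200 status; a timeout, a DNS error or a non-200 response would point at the network path rather than at the client itself.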

Again, thank you for your support.

Result of "sudo docker ps" is:

CONTAINER ID   IMAGE                             COMMAND                  CREATED       STATUS        PORTS   NAMES
4178410685df   librespace/satnogs-client:1.9.3   "satnogs-client"         2 weeks ago   Up 10 hours           satnogs_satnogs-client
5834d880541f   librespace/hamlib:4.0             "/docker-entrypoint.…"   2 weeks ago   Up 10 hours           satnogs_rigctld

Result of "sudo docker images" is:

REPOSITORY                  TAG     IMAGE ID       CREATED         SIZE
librespace/satnogs-client   1.9.3   a9d440e74731   10 months ago   2.26GB
librespace/satnogs-config   1.0     abe1eae13b54   10 months ago   1.08GB
librespace/ansible          9.9.0   e6fc71a7b3dc   14 months ago   572MB
librespace/hamlib           4.0     300643543cd9   21 months ago   129MB

Result of the connection check is:

*   Trying 94.130.162.100:443...
* Connected to network.satnogs.org (94.130.162.100) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=network.satnogs.org
*  start date: Nov 3 03:02:52 2025 GMT
*  expire date: Feb 1 03:02:51 2026 GMT
*  subjectAltName: host "network.satnogs.org" matched cert's "network.satnogs.org"
*  issuer: C=US; O=Let's Encrypt; CN=R12
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55850bf060)
> GET /api/jobs/?ground_station=2659&lat=48.4661&lon=1.448&alt=180 HTTP/2
> Host: network.satnogs.org
> user-agent: curl/7.74.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< server: nginx
< date: Fri, 14 Nov 2025 07:01:11 GMT
< content-type: application/json
< content-length: 2
< vary: Accept
< allow: GET, HEAD, OPTIONS
< content-security-policy: img-src 'self' https://*.gravatar.com https://*.mapbox.com https://*.satnogs.org https://*.wasabisys.com https://*.google-analytics.com data: blob: https://db-satnogs.freetls.fastly.net https://network-satnogs.freetls.fastly.net; script-src 'self' https://*.google-analytics.com 'unsafe-eval' https://network-satnogs.freetls.fastly.net; default-src 'self' https://*.mapbox.com https://archive.org https://*.archive.org https://*.wasabisys.com https://network-satnogs.freetls.fastly.net; child-src blob:; style-src 'self' 'unsafe-inline' https://network-satnogs.freetls.fastly.net; frame-src blob:; worker-src blob:
< strict-transport-security: max-age=31536000; includeSubDomains
< x-content-type-options: nosniff
< referrer-policy: same-origin
< cross-origin-opener-policy: same-origin
< x-frame-options: DENY
< x-download-options: noopen
< x-permitted-cross-domain-policies: none
< x-xss-protection: 1; mode=block
<
* Connection #0 to host network.satnogs.org left intact

As far as I can read, everything looks correct.

73, F6FKQ

Network connection is good, SatNOGS version is good. Try running these:

sudo docker exec -ti satnogs_satnogs-client SoapySDRUtil --find="driver=rtlsdr"

sudo docker exec -ti satnogs_satnogs-client rtl_test
sudo docker exec -ti satnogs_satnogs-client id

sudo docker exec -ti satnogs_satnogs-client SoapySDRUtil --find="driver=rtlsdr"

######################################################
Soapy SDR -- the SDR abstraction library
######################################################

Found Rafael Micro R828D tuner
RTL-SDR Blog V4 Detected
Found device 0
  driver = rtlsdr
  label = Generic RTL2832U OEM :: 00000001
  manufacturer = RTLSDRBlog
  product = Blog V4
  serial = 00000001
  tuner = Rafael Micro R828D

sudo docker exec -ti satnogs_satnogs-client rtl_test

Found 1 device(s):
0: RTLSDRBlog, Blog V4, SN: 00000001

Using device 0: Generic RTL2832U OEM
Found Rafael Micro R828D tuner
RTL-SDR Blog V4 Detected
Supported gain values (29): 0.0 0.9 1.4 2.7 3.7 7.7 8.7 12.5 14.4 15.7 16.6 19.7 20.7 22.9 25.4 28.0 29.7 32.8 33.8 36.4 37.2 38.6 40.2 42.1 43.4 43.9 44.5 48.0 49.6
Sampling at 2048000 S/s.

Info: This tool will continuously read from the device, and report if
samples get lost. If you observe no further output, everything is fine.

Reading samples in async mode…
Allocating 15 zero-copy buffers
lost at least 24 bytes

User cancel, exiting…
Samples per million lost (minimum): 0

sudo docker exec -ti satnogs_satnogs-client id

uid=999(satnogs-client) gid=999(satnogs-client) groups=999(satnogs-client)

Again, on the "satnogs_satnogs-client" side, everything looks correct.

73, F6FKQ

Sorry to come back to this topic, but I still have a blocking situation with my two ground stations. Both were rebuilt from scratch with the dockerized package.

One station worked properly for a couple of days, the other never did. Presently they are both showing the same issue, shown below:

DEBUG urllib3.connectionpool https://network.satnogs.org:443 "GET /api/jobs/?ground_station=4397&lat=48.4661&lon=1.448&alt=180 HTTP/1.1" 200 814
DEBUG satnogsclient.scheduler.tasks Fetched jobs from network, received 2 future observations.
INFO satnogsclient.scheduler.tasks Drop planned observation 12819503 (reason: deleted in network).
INFO apscheduler.scheduler Removed job 12819503

Note that the regular fetching runs without issue:

INFO apscheduler.executors.default Running job "get_jobs (trigger: interval[0:01:00], next run at: 2025-11-27 16:28:06 UTC)" (scheduled at 2025-11-27 16:27:06.067344+00:00)
DEBUG satnogsclient.scheduler.tasks Fetching jobs from network…
DEBUG urllib3.connectionpool Starting new HTTPS connection (1): network.satnogs.org:443
DEBUG urllib3.connectionpool https://network.satnogs.org:443 "GET /api/jobs/?ground_station=4397&lat=48.4661&lon=1.448&alt=180 HTTP/1.1" 200 814
DEBUG satnogsclient.scheduler.tasks Fetched jobs from network, received 2 future observations.
INFO apscheduler.executors.default Job "get_jobs (trigger: interval[0:01:00], next run at: 2025-11-27 16:28:06 UTC)" executed successfully

Any help is welcome, 73 de F6FKQ

That's strange, the observations weren't actually deleted in Network. Can you check that the local time on the RPi is right? You can check it with the date command; also check that the timezone is UTC.
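
For example (both commands are standard on Raspberry Pi OS; this is just a suggestion, adjust to your setup):

date -u
timedatectl

date -u should match UTC as shown on a reliable clock, and timedatectl should report the timezone and whether the system clock is synchronized. If you are running ntpsec, ntpq -p (if installed) should show at least one peer marked with an asterisk once the clock is actually synced.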

Additionally, I would suggest that you provide us with more logs, ideally while an observation is running (if one is), and note whether you see any errors in them.

It would also be useful if you could share the support report so we can do a quick check of the settings. To generate it, go to the menu item Advanced -> Support in sudo satnogs-setup.

Fredy, thank you for the reminder about the time. ntpsec was badly configured on both stations and both RPis had the wrong time. After correcting this, the two RPis now work properly and the first observations were fully completed.

Again thank you for the support. 73 de F6FKQ

Awesome, when I find some time I'm going to add this to the troubleshooting guide in the wiki for future reference.
