Definition of NOAA APT frames


Ever since we started collecting telemetry frames, both through SatNOGS observations and through SIDS submissions, I have been wondering how we could define a “telemetry frame” for NOAA APT observations. Currently, we are uploading decoded images in PNG format, which is a good beginning. However, a more formal definition of an APT frame is IMO needed, in particular if we want to accept NOAA APT data through SIDS submissions.

Revisiting the APT specification, I realized that it actually does define a complete APT frame as 128 lines:

So, a complete APT frame consists of 128 × 2080 samples (words), where each sample is 8 bits. The number of lines per frame could be discussed, but 128 sounds like a reasonable value.
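Just to make the size of such a frame concrete, a quick back-of-the-envelope sketch (the constant names below are only illustrative, not part of any proposed format definition):

```python
# One complete APT frame as proposed: 128 lines of 2080 8-bit
# samples (words) each, per the APT specification.
LINES_PER_FRAME = 128
WORDS_PER_LINE = 2080
BITS_PER_WORD = 8

frame_bytes = LINES_PER_FRAME * WORDS_PER_LINE * BITS_PER_WORD // 8
print(frame_bytes)  # 266240 bytes, i.e. exactly 260 KiB per frame
```

So each raw frame would weigh in at 260 KiB, noticeably more than a compressed PNG of the same data.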

Clearly, this would increase the amount of data submitted compared to PNG files; on the other hand, it would support interpretation of the embedded telemetry data without the need to parse PNG files.

What do you think?



The SatNOGS concept that received and decoded data is observation- and station-agnostic (does this concept exist?) and thus resides in satnogs-db is broken by the current architecture for APT image storage, which happens only as part of the corresponding observation in satnogs-network (this has bothered me for quite some time, but I haven’t seen a solution for it until now).
By defining an APT frame format and pushing the data to satnogs-db in this format, as is already done for other bands/protocols, this architectural “issue” is resolved.

In addition, it would make it easy to improve the visualization of the data (most importantly the images themselves) in satnogs-db: only the actual image data would be rendered to PNG, while the telemetry data would be provided in raw form (which happens anyhow by providing the APT frames themselves), or maybe even as part of a future advanced visualisation. ⇒ “Those b&w stripes along the edges will be gone”.

Thus, I really like your proposal!


I’ve just remembered that we had a discussion about this in the IRC/matrix channel the other day.

As I mentioned back then the APT specification does not explicitly state where a frame starts or ends.

If we want to be able to easily parse the data in the telemetry blocks later on, it would be useful to start a frame at the wedge called “WEDGE #1” and end it 128 lines later, after the “Channel ID Wedge”.
This is also the way I would interpret the frame format given by Figure 4.2.2-1, which you also included in your post.
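Under that interpretation (the frame starts at WEDGE #1 and contains 16 telemetry wedges of 8 lines each, which is what gives the 128-line total), the line range of each wedge within a frame follows directly. A minimal sketch, with the helper name `wedge_lines` being my own invention:

```python
LINES_PER_WEDGE = 8    # each telemetry wedge spans 8 APT lines
WEDGES_PER_FRAME = 16  # WEDGE #1 ... Channel ID Wedge, 16 * 8 = 128 lines

def wedge_lines(wedge):
    """Return the (start, end) line range of a 1-based wedge index
    within a 128-line frame that begins at WEDGE #1."""
    start = (wedge - 1) * LINES_PER_WEDGE
    return start, start + LINES_PER_WEDGE

print(wedge_lines(8))  # (56, 64) -> the brightest wedge
print(wedge_lines(9))  # (64, 72) -> the zero-modulation reference
```

Note that the wedge 8 → wedge 9 boundary lands exactly at line 64, the middle of the frame, which is what makes the detection approach below work.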

To determine the start or end of a frame, I would suggest looking out for wedges 8 and 9.
Section 4.2.2 “APT Transmission Characteristics” states that wedge 8 has 87% (±5%) subcarrier modulation, which is the maximum modulation allowed, so it should always be the brightest wedge.
Wedge 9, on the other hand, is the zero-modulation reference and should therefore be the darkest wedge.
Hence, finding the largest bright-to-dark transition between two adjacent wedges within 128 lines should somewhat reliably give us the middle of the frame.
So splitting the image 64 lines above and below that point should result in single-frame images.

Given the time for each frame, it should also be possible to puzzle frames from multiple observations together into one really large image, which will surely look awesome.

Since I can’t sleep I put together a quick proof of concept in python for what I proposed above.

The blue line is the averaged brightness of the 45-pixel telemetry wedge of the first channel (pixels 994 to 1039) for each line, and the orange plot is the difference between this value and that of the previous line.
As you can see, there is a nice repeating peak every 128 lines.

So, given a decoded image without too much slanting and with a sufficient dynamic range, splitting it into frames should not pose too much of a problem.
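For anyone who wants to play with the idea, here is a minimal sketch of that splitting step (not my original proof-of-concept script; the function name and signature are made up for illustration). It follows the approach above: average the channel-A telemetry wedge per line, take the line-to-line difference, and use the largest bright-to-dark drop (the wedge 8 → wedge 9 edge) as the frame midpoint:

```python
import numpy as np

def split_into_frames(image, tele_cols=(994, 1039), lines_per_frame=128):
    """Split a decoded APT image (2-D uint8 array, lines x 2080 pixels)
    into aligned 128-line frames.

    tele_cols is the column range of the channel-A telemetry wedge,
    as used in the plot above.
    """
    # Mean brightness of the channel-A telemetry wedge for each line.
    tele = image[:, tele_cols[0]:tele_cols[1]].mean(axis=1)
    # Line-to-line difference; the wedge 8 -> wedge 9 edge shows up
    # as the most negative value.
    diff = np.diff(tele)
    mid = int(np.argmin(diff)) + 1  # first line of the dark wedge 9
    # That transition sits in the middle of a frame, so frames start
    # 64 lines earlier (modulo the frame length).
    start = (mid - lines_per_frame // 2) % lines_per_frame
    return [image[top:top + lines_per_frame]
            for top in range(start, image.shape[0] - lines_per_frame + 1,
                             lines_per_frame)]
```

This naive version keys everything off a single global minimum, so a noisy line could throw it off; averaging the detected transitions over all repeats (as the repeating peaks in the plot suggest) would make it more robust.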
