Integrating gr-satellites into SatNOGS


As the author of gr-satellites, this is something that I have been thinking of doing for a long time but never got around to. Now that SatNOGS observations are becoming really popular and useful, it would be great to have demodulated data automagically appear for all the satellites that gr-satellites supports.

I am opening this thread to think about the best way of integrating gr-satellites into the SatNOGS client software. I have assumed that these forums are the best place to have the discussion. Feel free to move the conversation to another medium if you see fit.

I am not very familiar with the SatNOGS software, but I’ve been taking a look at the flowgraphs in gr-satnogs to see how things are done. It seems that the key to getting this right is in the interfaces. A good interface design will make it easy to plug in decoders from gr-satellites or other projects into the SatNOGS system.

Let me briefly describe how the interfaces in gr-satellites currently work. There is a flowgraph for each satellite. This was a decision I took when the project started, with the goal of making it possible to fine-tune decoding parameters (loop bandwidths, etc.) for each particular satellite. Currently I am not so sure this is the best way to do things, since several satellites have very similar flowgraphs because they use the same radio. Nevertheless, the list of different modes to support is also large. With the exception of a few standard modes (mainly AX.25, and the GOMspace radios, which are quite popular), many satellites implement custom protocols, especially regarding framing and FEC. As a side effect, this makes it easy to distinguish between different satellites when sending telemetry to the SatNOGS server, since each flowgraph has the NORAD ID of the satellite.

Each gr-satellites flowgraph has a UDP source as its input. The idea of this source is that the user can stream samples in real time from gqrx, a flowgraph in gr-frontends, or another source. The UDP source expects real samples at 48kHz. The format depends on the particular flowgraph, but it is always one of the following three:

  • FM. FM-demodulated audio, as you would use to receive FSK.
  • SSB. USB-demodulated audio, with the signal centred at 1.5kHz.
  • Wide SSB. USB-demodulated audio, with the signal centred at 12kHz.

The design goal was to allow the user to use a conventional radio in FM or SSB mode whenever possible (note that SSB mode usually has a bandwidth of 3kHz, and some modes, especially 9k6 BPSK, need SSB but don’t fit in 3kHz). This also explains the distinction between SSB and wide SSB: we use SSB instead of wide SSB whenever possible.
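To make the UDP input interface concrete, here is a minimal sketch of how a user-side tool could stream 48kHz real samples to a gr-satellites flowgraph. The sample format (16-bit signed little-endian) and the port number are assumptions for illustration; check the particular flowgraph for the actual expectations.

```python
import socket
import struct

def stream_samples(samples, host="127.0.0.1", port=7355, chunk=512):
    """Send 16-bit signed little-endian audio samples over UDP in small
    datagrams, roughly as gqrx or a gr-frontends flowgraph would.
    Assumed format: real samples at 48 kHz, int16 LE (an assumption for
    this sketch, not a documented gr-satellites contract)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        sock.sendto(struct.pack("<%dh" % len(block), *block), (host, port))
    sock.close()
```

A real sender would pace the datagrams at the audio rate; this sketch only shows the packing and socket side of the interface.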

The output of the flowgraph is essentially text on the console. Depending on the satellite, there is either a telemetry parser that prints out the telemetry in human readable format (it would also be a nice touch to show this information in SatNOGS observations), or just a hex dump of the packet. Most telemetry parsers use construct, so the parsed telemetry is also machine readable and can be passed around, pickled, etc.
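As a rough illustration of what such a telemetry parser does, here is a hand-rolled equivalent using only the stdlib struct module. The beacon layout and field names are entirely hypothetical; the real parsers use construct, which returns a similar machine-readable Container object declaratively.

```python
import struct
from collections import OrderedDict

# Hypothetical beacon layout (illustration only): 6-char callsign,
# battery voltage in mV, temperature in tenths of a degree C, uptime in s.
BEACON = struct.Struct(">6sHhI")

def parse_beacon(frame):
    """Parse a raw beacon into a machine-readable dict, analogous to the
    Container that a construct-based parser would return."""
    callsign, vbat_mv, temp_dc, uptime = BEACON.unpack(frame[:BEACON.size])
    return OrderedDict(
        callsign=callsign.decode("ascii").rstrip(),
        battery_v=vbat_mv / 1000.0,
        temperature_c=temp_dc / 10.0,
        uptime_s=uptime,
    )
```

Because the result is an ordinary Python mapping, it can be pretty-printed for humans or passed around, pickled, serialized, etc., which is exactly the dual use described above.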

There is also a telemetry submitter that sends packets to any SiDS compatible server (currently the SatNOGS server is used).

In some cases there are other kinds of output. Some AX.25 flowgraphs support a TCP KISS socket (which is kind of standard for a TNC), and you can also save the packets to a file in KISS format.
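For readers unfamiliar with KISS: it is a very simple framing, where each packet is delimited by a FEND byte (0xC0) and any FEND/FESC bytes inside the payload are escaped. A minimal sketch of both directions:

```python
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def kiss_escape(payload):
    """Wrap one packet in a KISS data frame (port 0, command 0)."""
    out = bytearray([FEND, 0x00])
    for b in payload:
        if b == FEND:
            out += bytes([FESC, TFEND])
        elif b == FESC:
            out += bytes([FESC, TFESC])
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)

def kiss_unescape(frame):
    """Recover the packet from a single KISS data frame."""
    body = frame.strip(bytes([FEND]))[1:]  # drop delimiters and port byte
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == FESC:
            out.append(FEND if body[i + 1] == TFEND else FESC)
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

This is why a TCP KISS socket is "kind of standard" for a TNC: any software that speaks this trivial framing can consume the packets.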

That’s more or less all regarding gr-satellites interfaces.

What I have seen in gr-satnogs is that the input is really an osmocom source. However, some DSP is done before we arrive at the OGG encoder. At that point we have something very similar to what gr-satellites expects at its UDP input.

The output is PDUs that go into a frame file sink and a UDP message sink. This fits very well with gr-satellites: we also have PDUs with the packets at some point in the gr-satellites flowgraph.

Now some ideas about how to get this going.

Off the top of my head, I am thinking about deciding on some good interfaces and encapsulating each gr-satellites decoder into a hierarchical flowgraph with these interfaces. In this way, it will be easy to drop the hierarchical flowgraph into a gr-satnogs flowgraph. I don’t think it is a good idea to make a different hierarchical flowgraph per satellite. It is probably better to do it per modem (understood as a set of modulation, coding, framing and FEC settings).

I still want to maintain the current scheme for the gr-satellites flowgraphs, since people are already using them in this way, and they are also useful for processing recordings, and even sending to the SatNOGS server the telemetry from recordings. The idea of doing hierarchical flowgraphs also fits nicely into this scheme, since a large part of each flowgraph will get replaced by the hierarchical flowgraph. This also saves some repetition between satellites using the same modem.

I think this long post is all for now. Please contribute with any ideas you may have. Careful planning will probably save us a lot of work.


Just bumping this as I’m curious what people think of this suggestion.


Hey @EA4GPZ !

I will try to address some of the points/suggestions you are raising, but will defer to @Acinonyx, @surligas, @fredy and @cshields for technical input on the architecture pieces.

This is the right place to discuss this, thanks for bringing this forward here!

Early on, and at key points of SatNOGS development, we have always planned for pluggability and modularity. For that reason we have always assumed that gr-satnogs might not be the only “radio” part of our technology stack, and specifically satnogs-client has been re-architected (and should be re-architected further) to accommodate such options.

Although our input currently comes from gr-osmosdr blocks, we will be moving away from them soon to a SoapySDR-based approach (we have a project underway to develop SoapySDR GNU Radio blocks). That said, I find great value in having an agnostic UDP-based approach for the IQ input on any of our radio solutions.

This seems like a nice way forward. One thing to keep in mind is that within our architecture the flow of data looks like this currently:

The most important and relevant point here is the differentiation between demodulated (raw hex) and decoded (structured/readable) data. In the SatNOGS workflow the decoding happens on SatNOGS DB (see other posts for more details). My understanding is that gr-satellites produces both (?). The only case where we decode to something directly in gr-satnogs is APT transmissions (and soon HRPT too), but there is a pending discussion there on whether we should decode in gr-satnogs or follow the data flow architecture and decode on DB.

On the point of one flowgraph per satellite: we have always been questioning our approach too (flowgraphs specific to modulations). We believe that the latest modular abstractions that gr-satnogs 2.0 brings confirm that we should stay away from satellite-specific flowgraphs and instead bring in any satellite-specific information (FECs, framings, etc.) as arguments through transmitter information (thus extending our transmitter model in DB to fit all needs).

Once again, I would like to convey the excitement of the whole SatNOGS team for the chance to align and de-duplicate efforts with you @EA4GPZ and express our gratitude for all the great work you have been doing with gr-satellites so far.

@surligas @Acinonyx @fredy @cshields and others please do chime in with ideas on how to push this forward.


Thanks for the architecture flow graph, that helps @pierros

Just to throw out some more context here: we’re currently collecting about 50k frames/day in satnogs-db from SiDS sources across 150 contributors (no way of telling how many are from gr-satellites vs @DK3WN’s tlm forwarder, as we don’t report/track that through the protocol)… But I throw it out there to point out that this pipeline is a critical piece of the puzzle right now, and why the “WIP” satnogs-dashboard that pierros linked to is such a priority.

satnogs-network is pulling in about 1.5k frames/day with about 50 stations in prod today (and another 23 testing in prod, and 35 in dev).

I point this out to say that we should definitely continue the manually generated pipeline that gr-satellites provides today, if that is feasible. That said, there is a lot of functionality and many modes in gr-satellites that we do not support today, and we have a growing number of stations that could benefit from this collaboration!

One other area of collaboration: I notice that @EA4GPZ has a lot of telemetry decoders already in his repo. With a little modification, and relaxing the desire for the kaitai format, we could plug these into the new decoding pipeline easily.

Glad to see this happening!!



@pierros @surligas tell us more!! :slight_smile:


gr-satellites produces both demodulated and decoded data. Demodulated data is produced for all satellites/flowgraphs and uploaded to SatNOGS DB using SiDS, so looking at your architecture diagram, I think that this output can be integrated as “gr-satnogs demodulated data” without any problems.

Decoded data is produced for some of the satellites (maybe 30 to 50%). It is done using construct to parse the binary data, so the output is a construct container. I will take a look at the kaitai format mentioned by @cshields to see how this could fit.

One concern is that the structure of the decoded data is very satellite-specific. Basically, every satellite implements its own fields. I’m curious about how you’re handling this. Is it possible to view some example of decoded data within SatNOGS, or is it still WIP? An example within gr-satellites can be seen here (look at the “Container” near the bottom of the post). Every field of the container is addressable as a Python structure or dict, but of course many times one should already know the names of the fields to do something useful with them.

One thing that I see as problematic is that many satellites have really nothing similar in their decoders, so trying to support everything with a parametric scheme can be difficult. For a real example from gr-satellites, let’s take QB50 A03 Pegasus and QB50 CA03 ExAlta-1.

Both use an FSK mode. In the case of Pegasus it is always 9k6, while ExAlta-1 can use either 4k8 or 9k6. However, the only similarity between the decoders is the clock recovery and bit slicing (FM demodulation is already done outside the flowgraph, perhaps by an analog radio). The flowgraphs share a lowpass filter -> clock recovery MM -> binary slicer chain. The rest is different. Actually, this chain of 3 blocks appears any time you want to receive FSK, so it could (and probably should) be a generic block (with the baudrate, etc., as parameters).
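To give a feel for what this shared chain does, here is a deliberately simplified stand-in in plain Python. The real GNU Radio blocks (Mueller & Müller clock recovery, binary slicer) track timing drift and noise; this sketch just samples each bit period at its midpoint, assuming perfect timing, to show the idea.

```python
def slice_bits(audio, samples_per_bit):
    """Naive stand-in for clock recovery + binary slicer: take one sample
    at the midpoint of each bit period and slice it against zero.
    A real M&M clock recovery loop also tracks timing error, which this
    simplified sketch ignores entirely."""
    bits = []
    i = samples_per_bit // 2
    while i < len(audio):
        bits.append(1 if audio[i] > 0 else 0)
        i += samples_per_bit
    return bits
```

With a 48 kHz input and a 9k6 mode, samples_per_bit would be 5, which is why the same three-block chain works across so many FSK satellites once the baudrate is a parameter.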

The rest of the Pegasus flowgraph consists of extracting 64-byte packets using a syncword, then running a (64,48) RS decoder based on the rscode library, then checking a non-standard CRC-16. No other satellite I’ve seen uses this kind of thing.
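For context on the CRC-16 step: a generic bitwise CRC-16 looks like the sketch below, using the standard CRC-16/CCITT-FALSE parameters for illustration. The actual Pegasus CRC is non-standard, as noted above, so its polynomial, initial value, or bit ordering would differ from these defaults.

```python
def crc16(data, poly=0x1021, init=0xFFFF):
    """Generic bitwise CRC-16, MSB first.  Defaults are CRC-16/CCITT-FALSE;
    a satellite-specific variant would swap in its own polynomial, initial
    value, and possibly bit reflection or a final XOR."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc
```

The point of the example is that even a "CRC-16" is not one thing: the parameter space (polynomial, init, reflection, final XOR) is exactly the kind of detail that makes fully parametric decoder descriptions hard.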

ExAlta-1 uses the GOMspace NanoCom AX100 in RS mode, so the rest of its flowgraph could and should be made into a generic “AX100 decoder” block. It consists of the following: a descrambler for the G3RUH polynomial, extraction of 256-byte packets using a syncword, and a custom block called “NanoCom AX100 decode” which reads the packet length from the packet header and then performs CCSDS descrambling and CCSDS RS decoding (using libfec). Another important thing to keep in mind is that the ExAlta-1 decoder actually runs 2 decoders in parallel, one for 4k8 and another for 9k6, since we don’t know beforehand which mode the satellite will use.
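The G3RUH step mentioned here is a multiplicative scrambler with polynomial 1 + x^12 + x^17. As a sketch of what that block does (operating on lists of bits rather than GNU Radio streams):

```python
def g3ruh_scramble(bits):
    """Multiplicative scrambler, polynomial 1 + x^12 + x^17: each output
    bit is the input XORed with the scrambler outputs 12 and 17 bits ago."""
    state = [0] * 17            # shift register of previous outputs
    out = []
    for b in bits:
        s = b ^ state[11] ^ state[16]
        out.append(s)
        state = [s] + state[:-1]
    return out

def g3ruh_descramble(bits):
    """Self-synchronizing descrambler: the taps run over the *received*
    bits, so the receiver locks on after 17 bits regardless of the
    transmitter's starting state."""
    state = [0] * 17
    out = []
    for b in bits:
        out.append(b ^ state[11] ^ state[16])
        state = [b] + state[:-1]
    return out
```

The self-synchronizing property is why this descrambler can simply sit in the stream before syncword detection, without needing any framing information.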

So I don’t really see how you could support these two satellites by entering some parameters into the satellite description in the SatNOGS DB and then having a generic decoder that can do every possible modem by reading those parameters. Here even the RS library used is different due lack of support for both of them in a single library.

Of course there are many satellites which follow more or less regular/standard schemes: AX.25 (where the only parameters are FSK/AFSK/BPSK, whether scrambling is used, and the baudrate), GOMspace radios, perhaps those based on the CCSDS stack. The rest of the satellites (which are not so many, but for me are the most interesting) have had their comms stack designed in a completely ad-hoc manner, so I don’t see a way to support them other than making an ad-hoc decoder.

As a final remark for this kind of reasoning, I find it quite difficult to describe precisely the modem used by a satellite in a few phrases/words (in such a way that someone reading the description could build a decoder for the modem). See the list of satellites in the gr-satellites README for my try at this.

As I said, I intend to continue maintaining gr-satellites as standalone decoders, since they can be useful in many situations where the SatNOGS software wouldn’t be so adequate.

The largest source of contributions to satnogs-db probably comes from UZ7HO’s SoundModem (through @DK3WN’s forwarder). This is unfortunately closed-source and Windows-only software, but it works very well, it is quite popular within the Amateur Radio community, and it supports more or less the same number of satellites as gr-satellites. It is probably also much more user friendly. However, I think that gr-satellites is a very good alternative to UZ7HO’s software, especially for those wanting any of the following: open source, embedded or automated operation (no GUI), looking under the hood to learn, adapting the software, etc.


First of all, let me say that this is an excellent idea and an opportunity for both projects to improve.

In SatNOGS, we have placed much emphasis on modular architecture since the start. This led to some design decisions that may not make much sense at first glance but are there to fulfill modular design principles. That being said, I will try to explain why the above data flow was selected and how network and storage restrictions affected it.

In a perfect world with unlimited bandwidth and storage, a super-thin SatNOGS Client would only be responsible for controlling the radio and publishing I/Q data to SatNOGS Network. SatNOGS Network would then demodulate and decode the data. Unfortunately, this is not currently feasible, since every observation can produce several hundred MiB of I/Q. Thus, a compromise was made to overcome this restriction: I/Q data is demodulated and decoded (or even re-encoded!) down to the first level at which it becomes acceptable to transmit and store over commercial internet connections. For example, I/Q is demodulated down to an AX.25 frame, but satellite telemetry is not further decoded, since the data is already sufficiently small to transmit over the internet. In addition, we try to minimize the decoding responsibilities of the clients, since they are usually very low on resources, which must be available as soon as possible for the next observation. Moving most decoding responsibilities centrally also helps with the deployment of new (or maintenance of existing) decoders, since no software update on the clients is required. Of course, the boundaries on how deep to decode on the client are a little blurred, especially for formats that do not have a very clear OSI layer separation, or where the modulation somehow depends on or changes with the transmitted data. But, AFAIK, these are generally exceptional cases.

In general, there is much confusion about what demodulation and decoding are and to which level they refer. For instance, AFSK requires two levels of demodulation. To avoid confusion, we came up with our own terminology. For SatNOGS:

  • Demodulation is everything, including any decoding, that produces a stream or a frame of binary data. It is the last level of non-human-readable data. It is also described as Mode in Network and Client.
  • Decoding is referring only to decoding of demodulated data to human readable format.

For example: In SatNOGS terms, the process of producing an AX.25 frame from I/Q is called Demodulation while the process of producing telemetry information from an AX.25 frame is called Decoding. Decoding is optional. There are cases (e.g. APT) in which demodulated data are already human readable.
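To give a small taste of the Decoding side in this terminology, here is the first step one would take on a demodulated AX.25 frame: reading a callsign out of an address field. In AX.25, each address byte carries an ASCII character shifted left by one bit, with the SSID packed into the seventh byte. (This example is illustrative; a full AX.25 decoder also handles the control, PID and payload fields.)

```python
def ax25_callsign(addr7):
    """Decode one 7-byte AX.25 address field: six callsign characters,
    each left-shifted by one bit on air, followed by an SSID byte whose
    bits 1-4 hold the SSID value."""
    call = bytes(b >> 1 for b in addr7[:6]).decode("ascii").rstrip()
    ssid = (addr7[6] >> 1) & 0x0F
    return "%s-%d" % (call, ssid) if ssid else call
```

Everything up to producing the raw 7-byte field is, in SatNOGS terms, Demodulation; turning it into the string above is Decoding.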

Let me say a few things about the future client architecture. There has been a preliminary study on how to standardize SatNOGS Client interface. Radio process will be separated from the client completely and will have its own control interface. We call this SatNOGS Radio. Two interfaces have been defined:

  • A control port: This is a TCP port which controls the Radio. It is used to send commands to the radio ranging from controlling the frequency to selecting the demodulation method (a.k.a. Mode). This will basically be a new radio in hamlib with many extended capabilities to cover all satellite modes.
  • A set of data ports: These will be UDP ports to which demodulated data will be sent. Since there may be multiple concurrent streams of data (e.g. frames, audio, waterfall, etc), multiple ports will probably be used.

In the case of gr-satnogs and gr-satellites, which are sets of flowgraph scripts with no native support for the control interface, a plugin system will allow wrappers to be loaded in SatNOGS Radio which will handle the commands and bring up and down the correct flowgraphs.

What matters to you is that the filesystem-based interface will be dropped and the data will be sent to the client via a UDP socket. Also, all the mapping from modes to flowgraphs and their parameters will be defined in the SatNOGS Radio plugin of gr-satellites.

Indeed, there is a problem with how to model and describe the subcases of various transmitter modes. We have been trying to contain this problem by not storing modulation parameters in Network. A single ‘Mode’ field describes the whole modulation and is mapped in the client to either a specific set of parameters for a common script or a whole individual script with no modulation-specific parameters.


Following the idea of doing as much as possible on the server while respecting the bandwidth constraints between client and server, have you considered the following scheme?

The client performs all DSP up to bit slicing. This means that the client would perform filtering, clock recovery and demodulation (FSK, BPSK, etc.) and then obtain the bitstream. The result of this process is a stream of bits at whatever the baudrate is (say, 9600 baud). Packet boundary detection is not done, so we have bits even when no packet is being transmitted.

Then it is the job of the server to perform packet boundary detection, all FEC decoding, CRC checking, etc.

The good thing about this scheme is that what the client does can be described in a very generic way. For most (perhaps all) of the satellites I can think of, this part can be described by the modulation (FSK/BPSK/AFSK), the baudrate, and, in the case of AFSK, the tone frequencies. All the complicated details and custom protocols are left to the server, where, as you say, it is easier to update the software.

The downside is that we need to transmit more data to the server than if we performed packet boundary detection and only sent valid packets. However, at typical baudrates (9600 baud, 1200 baud, or even 19200 baud), the bitstream is much smaller than the OGG recording that is already sent to the server, so I guess this wouldn’t impact much the total bandwidth used by the client.
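To put rough numbers behind this claim (under assumed conditions: a 10-minute pass and uncompressed 16-bit 48 kHz audio; OGG would shrink the audio, but not anywhere near the bitstream size):

```python
def pass_data_volume(baud, pass_seconds=600, audio_rate=48000, sample_bytes=2):
    """Compare the raw sliced bitstream against uncompressed 48 kHz audio
    for one pass.  Returns (bitstream_bytes, audio_bytes)."""
    bitstream = baud * pass_seconds // 8             # 8 bits per byte
    audio = audio_rate * sample_bytes * pass_seconds
    return bitstream, audio

bits, audio = pass_data_volume(9600)
# 9600 baud over a 10-minute pass: 720 kB of bits vs 57.6 MB of raw audio
```

So even at 9k6 the continuous bitstream is roughly two orders of magnitude smaller than the audio, which supports the argument that it would not dominate the client's upload.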


Revisiting this thread to keep the discussion going in light of the Artifacts proposal.

@EA4GPZ I think your proposal makes sense and can be well accommodated within the new Artifacts scheme. Would you agree?
@surligas & @Acinonyx can you provide some input on the last post by @EA4GPZ ?

A clarification for your proposal, @EA4GPZ, would be that we can go both ways: have the demodulators up to the bitstream on the client, but also provide plugin options for framing, as it will be needed in cases of TC&C (done locally on the client). In either case, the important point for now is to validate the Artifacts proposal so we can make sure it is futureproof.

On a more general note: It is sad to see duplication between two open source projects with common goals and vision. I am sure we can find a way to de-duplicate and move forward stronger together.


I think that incorporating the ideas that appeared in this thread into the Artifacts scheme is the way to go.

An important question is what should be done client-side and what should be done server-side. I agree with your ideas. Clients should always provide a Bitstream object to the server for further processing, but might also have some deframers for particular situations (TC&C as you mention is one, but perhaps we also want to support client-side deframing for really popular protocols such as standard AX.25).

The way that I see gr-satellites fitting best into this scheme is on the server side. It would provide a “function” that takes a Bitstream Artifact and produces Frame/Packet Artifacts by detecting packet boundaries, doing FEC decoding, checking CRC, etc. This “function” could be run on the server as a callback when new Bitstreams are submitted, or in batch processing, essentially making sure that the database is always populated with the Frame Artifacts corresponding to the Bitstream Artifacts that have been submitted.
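The packet boundary detection part of that "function" might look like the sketch below: correlate a known syncword against the bitstream, tolerating a few bit errors, and cut a fixed-length frame after each hit. All names are hypothetical, and real decoders would then feed each candidate frame to FEC and CRC checks before accepting it.

```python
def find_frames(bits, syncword, frame_bits, max_errors=2):
    """Scan a bit list for the syncword (allowing up to max_errors bit
    flips, since the bitstream arrives before any FEC) and return the
    fixed-length frames that follow each hit."""
    frames = []
    n = len(syncword)
    i = 0
    while i + n + frame_bits <= len(bits):
        errors = sum(a != b for a, b in zip(bits[i:i + n], syncword))
        if errors <= max_errors:
            frames.append(bits[i + n:i + n + frame_bits])
            i += n + frame_bits     # skip past the frame we just cut
        else:
            i += 1
    return frames
```

Tolerating bit errors in the syncword matters precisely because, in this scheme, the server receives raw sliced bits with no error correction applied yet.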


Hi there,
I would like to dedicate my bachelor thesis to some parts of this topic, asking the question “How can new satellite decoders/demodulators be incorporated into SatNOGS?”

Being a student at TU Berlin, I would like to use the BEESAT/TECHNOSAT protocol as an example. As far as I know, @EA4GPZ’s gr-satellites also has demodulation/decoding for these satellites. If there is sufficient time I would also consider S-NET as a second example.

It would be great if the things I do were of use to upstream SatNOGS; that is why I’m reaching out and trying to find out what the current roadmap is.

I’ve already created a simple flowgraph based on the example-flowgraph in gr-satnogs and beesat-sdr for the mobitex nx protocols used by the BEESATs. And added some code to the if-statement in the satnogs-client to use it to decode the protocol. (Based on the mode from satnogs DB) While this approach does work I would like to have something like a plugin based system.

I like the idea by @EA4GPZ of having the bitstream or audio generated by the client, the frame decoding done by the server, and the decoding of the data into something useful done by the server as a second step. (As far as I understand, this second step would be done by some kaitai scripts?)

That sounds exactly like the thing I would like to have. Is that already implemented in the gr-satnogs master branch? Or where could I find code or a more detailed description for this?

I’m also looking into the idea of aritfacts. It might be a good way to get some more metadata in the satnogs DB and choose appropriate decoding/demodulation based on this data. (for example also concerning this issue:

Even though @EA4GPZ says that there is not so much code reuse between satellites, I think it makes sense to describe the data in three layers:

  • “physical”: decoding the received data into frames or audio (modulation; GMSK in the case of the TUBSATs)
  • “data”: generating frames from the bitstream (FEC, etc.; Mobitex-NX in the case of the TUBSATs)
  • “application”: decoding frames into chunks of meaningful data like battery power, etc.; done by kaitai and presented in Grafana?

We would also like to do a little hackathon in January to get things going and introduce more TU Berlin students to SatNOGS and its concepts. If the concept of artifacts is somewhat more settled at that point, it might also be interesting to submit the telemetry data collected by TU Berlin to SatNOGS (it is currently stored in a local database at TU).

I hope this post is not too confusing.



I think that this is a key remark and it is good to orient ourselves instead of getting lost in a sea of different possibilities.

Speaking in terms of the Artifacts we are proposing, I would write it in the following form:

RF -> (physical layer decoder) -> Bitstream -> (data layer decoding) -> Frame -> (application layer decoding) -> ???

Here RF, Bitstream, Frame and ??? represent particular kinds of Artifacts that we would like to store in the SatNOGS DB. The job of the different decoders is to take an Artifact of one kind and generate the corresponding next Artifact along the chain.

RF stands for either an IQ or audio recording Artifact, or it may even represent the SDR hardware on a SatNOGS client (so physical layer decoders may run on the client to submit Bitstream Artifacts to the database).

Bitstream is what I’ve been proposing above.

For Frame I don’t have a precise definition, but I can usually tell the best approach for each satellite/modem. It’s what we are currently storing in the SatNOGS DB as packets. It involves decoding all FEC, and checking all CRC, but not merging different packets (for instance, there are satellites where the Payload of the frames is a KISS stream, and a telemetry “unit” might be fragmented into several frames).

???, the output of the application layer decoder, is broader, since most satellites use ad-hoc protocols. Typical examples, however, are timestamped telemetry channels (voltage, temperature, current, etc.) and images.
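One way to picture the chain above is as a pipeline of typed stages, where each decoder consumes one kind of Artifact and produces the next. The types and function names below are entirely hypothetical, just to make the proposal concrete; the real Artifacts scheme may model this quite differently.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical Artifact kinds, purely illustrative.
@dataclass
class Bitstream:
    bits: List[int]

@dataclass
class Frame:
    payload: bytes

@dataclass
class Telemetry:
    fields: dict

def run_chain(rf_to_bits: Callable, bits_to_frames: Callable,
              frame_to_telemetry: Callable, rf):
    """Run the RF -> Bitstream -> Frame -> Telemetry chain.  On the server,
    each stage could equally run as a callback on new submissions or as a
    batch job over the database."""
    bitstream = rf_to_bits(rf)
    frames = bits_to_frames(bitstream)
    return [frame_to_telemetry(f) for f in frames]
```

The appeal of this shape is that any stage can be re-run independently, which is exactly the batch/backfill property described above for keeping Frame Artifacts in sync with submitted Bitstream Artifacts.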

PS: The BEESAT decoder in gr-satellites uses the NX Decoder block from beesat-sdr, which really does all the hard job (it’s the data layer decoder, in this case). For S-NET I wrote a decoder from scratch.


You can see the difference: there is a version field. If I am correct, gr-sids uses its own version, different from the telemetry forwarder’s. This would make it possible to distinguish how the data was forwarded.