@PE0SAT, you forgot to mention that Occam’s Razor applies when “other things being equal…”. Regardless of whether we coordinate on SiDS, we need to start building a telemetry data model that can hold much more information than what we can get from the current SiDS spec. This will solve many of the issues that are mentioned in this thread.
Regardless of whether we coordinate on SiDS.
For me this discussion has nothing to do with the frame transport, so why explicitly make this remark?
This will solve many of the issues that are mentioned in this thread.
Based on my experience, and given that the current transport is a “KISS” (keep it simple …etc.) solution, I now see a proposed path that will make things more complicated and will also add even more dependencies on the parties involved.
I really hope that I am missing something in the explanation so far and that, in the end, this will make things less complicated and easy to implement for all involved.
But let me ask a simple question: this topic is about frame format standardization, so does the SatNOGS team even want to standardize on a common frame format?
I’ve already explained why format standardization is related to SiDS and how the transport imposes restrictions on the format. If there was a way that the transport could carry the information whether a packet includes a CRC or not, there would be no reason to enforce stripping or keeping the CRC on decoders. Each decoder or user would be able to select whether a CRC should be included. I deliberately avoid suggesting in detail how this will be expressed in the model because I think that it should be a collaborative effort (repository, MRs, reviews, discussions, etc.).
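As a purely illustrative sketch (the field names below are invented and are not part of SiDS or of any agreed model), such decoder metadata could travel alongside the raw frame like this:

```python
# Hypothetical example only: the "decoder_metadata" fields are placeholders,
# not part of the current SiDS spec or of any agreed telemetry data model.
import json

submission = {
    "norad_id": 41460,                    # satellite identifier (example value)
    "timestamp": "2021-05-01T12:34:56Z",  # observation time (UTC)
    "frame": "a1b2c3d4",                  # raw frame as hex, as submitted today
    "decoder_metadata": {
        "crc_present": True,              # the frame still contains the CRC bytes
        "crc_checked": True,              # the decoder verified the CRC
        "crc_algorithm": "CRC-32C",       # which checksum the decoder assumed
    },
}

print(json.dumps(submission, indent=2))
```

With something along these lines, a decoder that strips the CRC and one that keeps it could both submit frames, and consumers could tell the two cases apart.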
Again, I’m proposing a generic solution to the problem at hand and a path to move forward in a systematic way by creating a telemetry data model on which we all agree. I believe that the current SiDS protocol is underspecified for this case, as I explained earlier. Whether it’s SiDS v2, HDF5 or something else is insignificant at this point. Nevertheless, we have already been experimenting with HDF5 and waterfall artifacts with great success.
The problem with your specific example “If there was a way that the transport could carry the information whether a packet includes a CRC or not” is that the decoder application often does not know this, unless the developer guessed and checked what each satellite is doing and hardcoded it in the application. The satellite knows what it’s transmitting, sure, but it isn’t explicitly indicated in the packet, and the decoder application often has a limited view of what the satellite is transmitting.
As an example, in the past in gr-satellites I tried to check the CRC-32 of all the CSP satellites. There is even a bit in the CSP header that indicates whether the CRC-32 is present in a frame or not, so this sounded like an easy job. Well, it isn’t: some satellites include the CSP header in the CRC calculation, others don’t; some satellites encode the CRC as big-endian, others as little-endian; others don’t have the CRC flag in the CSP header correctly set. So I needed to check what each satellite did and implement special rules. Eventually I stopped trying to check the CRC-32 in the CSP frames, as it was too much work to support it.
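For illustration, the special-casing boils down to something like the sketch below. This is not the actual gr-satellites code, and zlib’s CRC-32 is only a stand-in for the checksum (libcsp actually uses a CRC-32C variant):

```python
# Illustrative sketch of the per-satellite rules described above; not the real
# gr-satellites implementation. zlib.crc32 stands in for the checksum here,
# while libcsp actually uses a CRC-32C variant.
import struct
import zlib


def crc32_candidates(frame: bytes):
    """Try the CRC-32 conventions seen in the wild on a CSP frame."""
    body, trailer = frame[:-4], frame[-4:]
    # Some satellites include the 4-byte CSP header in the CRC, others don't.
    for coverage, data in (("header included", body), ("header excluded", body[4:])):
        crc = zlib.crc32(data) & 0xFFFFFFFF
        # Some satellites store the CRC big-endian, others little-endian.
        for endian, fmt in (("big-endian", ">I"), ("little-endian", "<I")):
            yield f"{coverage}, {endian}", struct.unpack(fmt, trailer)[0] == crc
```

Multiply those combinations by per-satellite quirks like an incorrectly set CRC flag, and the amount of special-casing grows quickly.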
As I explained earlier, it should be possible to transport information on how the packet was decoded. Otherwise, all decoders are restricted to decoding the same way.
Why? In the case of CRC-32 which you describe, the developers of different decoders might not agree with our decision of not checking the CRC for all CSP packets, and may prefer to optionally enable and disable CRC checking per satellite. The opposite hypothetical case of agreeing to always check CRC is even worse; it would prevent some decoders from submitting packets due to possible unimplemented CRC checking. In this case, making CRC checking optional and indicating somehow whether it was checked or not is the solution.
One could argue that whether the CRC should be checked or not is up to the decoder. Wrong! If we need data with an adequate level of consistency for statistics, graphing and maybe scientific value, we should know whether a CRC was checked or not. A long-term goal of SatNOGS DB is to analyze the data and uncover patterns and trends which are not visible right now and will benefit both satellite and ground station operators.
How do you see this in practice? And don’t forget that most of the data sent to the database does not come from dedicated SatNOGS systems, but from enthusiastic radio operators who have a Windows system as their main OS.
We need to approach the problem systematically. The first step is to develop the telemetry data model. During this process it will become clear which information is useful, what to keep and what to drop, and what is optional and what is not. This must be a collaborative effort.
I won’t jump into how this model will eventually be used in practice because apparently it has caused confusion.
Clear,
But in the meantime, I would like to see cooperation on working with standardized frame formats so we can support the current way of working and thereby produce data that is consistent for all involved.
This could also be a path where we learn what would eventually be feasible to incorporate into the data model you want to design.
Then the question is: what would be the starting point we use?
One starting point on what is expected in a frame in SatNOGS DB could be the Kaitai Structs in satnogs-decoders. For the content of a frame to end up in the dashboard it must adhere to the format specified by the corresponding Kaitai Struct. E.g., the above-mentioned standardization of AAUSAT-4 data resulted in satnogs-decoders@33ec059.
Does this answer lead in the direction you had in mind, @PE0SAT?
@Acinonyx
I personally do not understand why the development of a “telemetry data model” would help with the need for the standardization of frames. In my understanding the “experimenting with HDF5 and waterfall artifacts” was motivated by the need for additional data to be stored alongside the frames (e.g. “SNR”, “mean frequency deviation for this frame”, “which standard/standard version does the frame adhere to?” etc…). Thus, for me, the development and adoption of a “telemetry data model” is/was orthogonal to the “standardization of frame formats”.
Example:
With artifacts the AAUSAT-4 frames could be enhanced by a metadata field specifying which frame format they adhere to. But the frame format still must be standardized, even if there are multiple formats standardized (“No length and HMAC”, “Length and HMAC”, “Only HMAC” in this specific case, see aforementioned 33ec059).
I think the cooperation between all developers regarding the AAUSAT-4 decoding is a good example of a way to standardize. Let’s see if we can do this up front when new satellites are developed and finally launched.
Then there is only a single Kaitai structure that can use frames from all the different transport methods as input.
But this will also give the teams the same frame structure from all stations that support SatNOGS.
The data model is the way to express standardization. Then, we can use the model to create generators, parsers, transport formats, etc. The multiple standardized formats that you mention come from a data model which says that a packet can have a length field or not, and an HMAC or not. You are actually modelling the data right now simply by describing the options.
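As a rough, hypothetical sketch (the names are invented; this is not a proposed schema), “modelling the options” for the AAUSAT-4 case could look as simple as:

```python
# Hypothetical sketch only: class and field names are placeholders, not a
# proposed SatNOGS schema. It merely expresses the "length or not, HMAC or
# not" options mentioned above as an explicit model.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Aausat4Frame:
    payload: bytes
    length: Optional[int] = None   # present only in the "Length and HMAC" variant
    hmac: Optional[bytes] = None   # present in the variants that carry an HMAC

    @property
    def variant(self) -> str:
        if self.length is not None and self.hmac is not None:
            return "Length and HMAC"
        if self.hmac is not None:
            return "Only HMAC"
        return "No length and HMAC"
```

Generators, parsers and transport formats can then be derived from such a model instead of each tool hardcoding its own assumptions.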
@Acinonyx I am more of a practical person when discussing, which is both a good and a bad trait – I know. But I have a hard time understanding how to practically implement your vision of the data model such that we can move forward with “prototyping” rather than just dreaming.
I am not trying to be pessimistic here; I am just a community member trying to use SatNOGS and contribute here and there, but how should this model discussion/development be done?
I mean, the “schema” issue you mentioned before has been open for a year. Link, as earlier, for completeness: https://gitlab.com/librespacefoundation/satnogs/satnogs-db/-/issues/317
As I understand it, that “schema” is intended to be a more thorough description of what we currently call “transmitters” in the DB, which currently is mostly an RF modulation form, baud rate and frequency. Is this correct? If so, then this is not the same as the HDF5 work mentioned, but orthogonal to it – as @kerel expresses it, right?
I would assume this should also make the transmitter selection in the network frontend more generic, such that I can just select “decode the GMSK stuff from AAUSAT4” and the GNU Radio flowgraph has the opportunity to auto-detect the baud rate, modulation index, etc. and add that as meta information to the observation via the possible HDF5 magic, instead of me having to decide on the baud rate when scheduling the observation.
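To make that last point a bit more concrete, here is a rough illustration (using h5py, with invented attribute names that do not follow any SatNOGS artifact spec) of what storing auto-detected parameters alongside a frame could look like:

```python
# Rough illustration of the "HDF5 magic" idea: store the raw frame as a dataset
# and attach whatever the flowgraph auto-detected as attributes. The attribute
# names are invented for this example, not part of any SatNOGS artifact format.
import numpy as np
import h5py

frame = np.frombuffer(b"\xa1\xb2\xc3\xd4", dtype=np.uint8)  # dummy frame bytes

with h5py.File("observation_artifact.h5", "w") as f:
    dset = f.create_dataset("frames/0", data=frame)
    dset.attrs["detected_baudrate"] = 2400
    dset.attrs["detected_modulation"] = "GMSK"
    dset.attrs["modulation_index"] = 0.5
    dset.attrs["snr_db"] = 12.3
```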
That seems a reasonable starting point for understanding what kind of data is in the frames of each particular satellite. However, I would rather have an open discussion between all stakeholders that ends up with some agreement (ideally set down in written documentation) rather than relying on “every software should be compatible with implementation X” (in this case SatNOGS DB).
Additionally, Kaitai Structs from SatNOGS are great as a tool to describe and understand the telemetry format of those satellites for which we know the format. For many satellites we have no clue about the different fields that are in the telemetry frames. The best we can do to describe them is to say that (for example) they are AX100 frames with the ASM+Golay protocol, a size of N bytes, and a CSP header, but then we know nothing about the contents of the payload.
Yes, I fully agree with this, and it is the exact reason why I wrote “one starting point” and “could”.
Thus I also do see the need for written documentation about the frame standard(s) expected in SatNOGS DB for each satellite.
The data model is the way to express standardization.
But I also do not understand how (or where) this data model will be implemented. I know that one part of this development is tracked in the “SatNOGS artifact format” milestone. But this does not include any form of data model development.
Eventually I stopped trying to check the CRC-32 in the CSP frames, as it was too much work to support it.
I guess it is more related to poor protocol standardisation. And it can only be solved by CSP 2.0, which should explicitly define how the CRC should be calculated, etc.
One starting point on what is expected in a frame in SatNOGS DB could be the Kaitai Structs in satnogs-decoders.
Unfortunately, Kaitai structures won’t validate the data. E.g., the CRC might be part of the payload, but will it be correct? Or take the byte order in the CSP header: Kaitai defines the rules for how to parse these bytes, but if the bytes are in reverse order, it will still accept such frames.
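The point can be shown without Kaitai at all: a layout-only parser (which is essentially what a generated Kaitai class is) happily accepts a frame whose CRC bytes are wrong. A plain-Python sketch, using zlib’s CRC-32 purely for illustration:

```python
# Plain-Python illustration: a layout-only parser splits off the "crc" field
# whether or not it matches the payload, so parsing alone proves nothing about
# data integrity. zlib's CRC-32 is used purely for illustration.
import struct
import zlib


def parse(frame: bytes):
    """Split a frame into payload and a trailing 4-byte big-endian CRC field."""
    return frame[:-4], struct.unpack(">I", frame[-4:])[0]


payload = b"\x01\x02\x03\x04hello"
good = payload + struct.pack(">I", zlib.crc32(payload))
bad = payload + b"\x00\x00\x00\x00"

for frame in (good, bad):
    body, crc = parse(frame)                       # both frames parse fine
    print(crc == (zlib.crc32(body) & 0xFFFFFFFF))  # True, then False
```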
What we are actually discussing here is something similar to https://wiki.wireshark.org/Development/LibpcapFileFormat. This format can be produced by the tcpdump tool, and it is a very standard way of capturing frames over the wire. Radio channels and satellite communications are no different. However, HDLC/TCP/IP protocols are more mature:
- Ethernet frames and upper protocols are standardised. There is only one way to calculate an IP header checksum or the CRC of a packet.
- The hardware is very well known. In contrast, a satellite can implement the transport layer with errors, or use its own protocol.
We can’t make the CSP protocol more consistent. So even if we define the CSP packet format for the database, there will always be a satellite that implements it in its own way. So as I suggested above, we should capture data-link layer frames as they come and let client applications decode them. This would be consistent with current network capture tools and designs.
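A minimal sketch, in the spirit of the libpcap format linked above, of capturing data-link frames exactly as they come (simplified; the linktype value is an assumption and should be checked against the libpcap documentation before relying on it):

```python
# Minimal pcap-style writer: frames are stored exactly as received, with a
# timestamp, and decoding is left entirely to client applications.
import struct
import time

LINKTYPE_AX25 = 3  # assumed linktype for raw AX.25; verify against libpcap docs


def write_capture(path: str, frames):
    with open(path, "wb") as f:
        # Global header: magic, version 2.4, thiszone, sigfigs, snaplen, linktype
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, LINKTYPE_AX25))
        for frame in frames:
            ts = time.time()
            sec, usec = int(ts), int((ts % 1) * 1_000_000)
            # Record header: ts_sec, ts_usec, captured length, original length
            f.write(struct.pack("<IIII", sec, usec, len(frame), len(frame)))
            f.write(frame)  # raw data-link frame, untouched
```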
Another important topic, which was not discussed in this thread: client applications. We’re trying to figure out a standard format for whom? To improve the lives of client decoder developers? Or of scientists and researchers? Satellite operators? Enthusiasts? All of them require completely different formats. For example (IMHO):
- scientists and researchers need data in JSON or CSV format. They deal with scientific payload rather than internal protocols and transport layers.
- satellite operators need data in raw format. Just byte arrays. They already have their own decoders and can deal with that.
- enthusiasts need data in AX.25 or KISS. Most TNC and radio-amateur tooling deals with these formats.
I took exactly this approach in r2server. It can output raw data suitable for jradio and parsed json suitable for humans:
https://r2server.ru/api/v1/telemetry/43596
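For the “enthusiasts” case in the list above, going from a raw captured frame to what TNC tooling expects is mechanical. A minimal sketch of standard KISS framing (FEND/FESC escaping):

```python
# Minimal KISS framing sketch: wrap a raw frame with FEND delimiters and apply
# the standard FESC escaping, which is what most TNC-oriented tooling expects.
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD


def kiss_frame(data: bytes, port: int = 0) -> bytes:
    out = bytearray([FEND, (port << 4) | 0x00])  # command 0x00 = data frame
    for byte in data:
        if byte == FEND:
            out += bytes([FESC, TFEND])
        elif byte == FESC:
            out += bytes([FESC, TFESC])
        else:
            out.append(byte)
    out.append(FEND)
    return bytes(out)
```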
Coming back to this topic, I’ve opened an issue in gr-satellites addressing point A. Unless someone gives a reason against it, in some weeks I’ll remove the CSP header swap. It would be good to check with Andy UZ7HO to see what he is doing.
So as I suggested above, we should capture data-link layer frames as they come and let client applications decode them.
Bravo! That was exactly my point.
I must have missed your latest reply though…
Imagine: if there is yet another CRC (or whatever data integrity check) implemented inside the CSP payload - how many onion rings will the modem unpack until it serves the data to the consumer (in “our” case: satnogs-decoders)?
I’ve been in contact with Andy and he told me that he already dropped the swap in early 2021.
Edit: See version 0.09b, http://uz7.ho.ua/modem_beta/other-versions.zip
Great to see that this subject is still being used to standardise the different decoders.