Open Data for Space Situational Awareness - A way forward

Hello all,

It should be no surprise to most of you that Libre Space has been thinking about SSA, open data, and the role of our projects in both for a long time now.

We believe it is time to publish our ideas around the need for openness in SSA data (orbits, operational reports, etc.) and to back them up with a set of proposed developments on our side.

Our full article on the topic can be found here:

Please use this thread for comments, opinions, feedback and join us in our effort to make SSA data Open!


Hi Pierros,

That’s an extensive publication on the topic! :ok_hand: Thank you :pray:
I will need some time to process it. But in the meantime - what activities do you plan that need support? Is there some place to find info about that?



That was a very welcome article - we do need to wrest the “Satellite Catalog” away from governments. But we need to keep authoritarian governments from having veto power over what is shown - there is not a clear path ahead.

I did leave a comment on the article and look forward to future discussion - but look forward far more to future action.


The article, indeed, is quite good. Especially, I liked the analysis of the current status quo regarding TLEs. The fact that the original license currently forbids further distribution by third parties, together with the range of analysis tasks that can be carried out with those TLEs, is a huge motivation to take this step and make the data truly open. The undertaking is big and quite complex, so it is a long, long term enterprise. The first steps, improving the current data collection with timestamps, should be a good start to see what sort of results can be achieved. Then, as hardware and software improve, we can see whether the required functionality can be deployed. I look forward to supporting this action.


This may be jumping down into the details a bit soon for this effort, but the kickoffs are always a good time to establish broad patterns of behavior that become natural actions for when you get down into implementing the details.

When it comes to the goal of transparency for the entire project, consider extending that philosophy down into the full pipeline of how a TLE “comes to be”, not just the governance of the bigger effort.

In other words, full transparency into the pedigree or lineage of the TLE itself.

In some manner, that means keeping a full, traceable birth-line stretching from measurements/observations, through the various processes used to exploit those measurements (code used, time it was run, server farm used, etc…) in fitting an OD solution, all the way to the “birth” of a unique time-stamped ELSET created for a particular vehicle.

Specifically I’m thinking how such a transparent/traceable process can help pin down issues that will naturally occur in the process.
ex. Mistagging, perhaps traceable all the way back to a specific observation, conducted by a specific GS-client, etc… or to an operator or process involved in the processing.
Similarly, large residuals in the OD fit that are traceable back to an instance of a particular flawed version of the OD “code” that was pushed/published by accident, or simply to a “poorly performing” client station.
Just two bookend cases or examples where having the line of pedigree for a single humble TLE would help in troubleshooting what might be the root cause in the event of a problem.
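To make the idea concrete, here is a minimal sketch of what such a per-TLE provenance record could look like. Everything here is an assumption for illustration only: the field names (`observation_ids`, `od_code_version`, `fit_host`, …), the values, and the schema itself are hypothetical, not an existing SatNOGS format.

```python
# Hypothetical provenance record for a single ELSET: each field is an
# invented example of one link in the "birth-line" described above.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TLEProvenance:
    """Traceable lineage for one time-stamped ELSET."""
    norad_id: int                # vehicle the fit belongs to
    epoch: datetime              # epoch of the resulting ELSET
    observation_ids: list[int]   # ground-station observations used in the fit
    od_code_version: str         # e.g. git commit hash of the OD software
    fit_host: str                # machine/worker that ran the fit
    fit_time: datetime           # when the fit was produced
    residual_rms_km: float       # quality metric of the OD solution
    tle_lines: tuple[str, str]   # the resulting two-line element set

# With such a record, a mistag or a bad fit can be walked back to the
# exact observations, code version, and machine involved:
record = TLEProvenance(
    norad_id=99999,
    epoch=datetime(2020, 1, 1, tzinfo=timezone.utc),
    observation_ids=[1234567, 1234570],
    od_code_version="a1b2c3d",
    fit_host="od-worker-03",
    fit_time=datetime(2020, 1, 1, 6, 30, tzinfo=timezone.utc),
    residual_rms_km=0.8,
    tle_lines=("1 99999U ...", "2 99999 ..."),
)
```

The point is simply that the lineage travels with the ELSET, so “reverse your way back to root cause” becomes a lookup rather than detective work.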

In a way, you could think of it as similar to how the source code for the SATNOGS project is managed right now. Each pull-request for code changes is fully traceable, back to point and time of origin (who, when, where, why), should there be a discovered flaw in the code later on.
Except that instead of source code, we’re doing this git-like process with observations, the processing steps, the publishing of the results, etc…
In fact you could take that analogy a bit further still, and treat a TLE (or the process used to make one) like a miniature form of “source code”…

  • don’t like the way the “official system” derived that particular TLE ? Maybe because you believe your filter or smoother settings in your own OD code are “better”… Fork it ! (or branch it, if that makes more sense)
    I can easily envision a system where someone “checks out” a TLE from a git-repo, but chooses to follow the @{username} branch (because he’s doing some interesting experiments), and eventually settles on using the #{hash-blah-hash} commit for that TLE (and branch) because it’s giving good results…
    Having watched the recent efforts of a few OD “wizards” looking to produce their own OD fits for the recently released crop of cubesats from ISS, it really does feel like a bunch of clever guys working on their own “forks” or “branches” for a particular TLE fit for a particular vehicle. If I had been in the shoes of the various groups who were scrambling to find any TLE that helped them acquire their bird a little bit better, I would have wanted to “subscribe” to the @fredy “branch” of TLEs (in the Phoenix TLE sub-repo) for some time, and then maybe later switched to the “master” branch as things settled down.
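The git analogy above can be boiled down to a toy example. This is not a proposal for a real schema; the function, fields, and hashing choices below are invented purely to show how a TLE revision could be content-addressed like a git commit, with each new fit pointing at its parent.

```python
# Toy illustration: a "TLE commit" whose id is a hash of its content
# plus its parent's id, giving a git-like traceable history of fits.
import hashlib
import json

def tle_commit(tle_lines, parent_hash, author):
    """Return a content-derived commit id for one TLE revision."""
    payload = json.dumps(
        {"tle": tle_lines, "parent": parent_hash, "author": author},
        sort_keys=True,
    ).encode()
    return hashlib.sha1(payload).hexdigest()

# "master" publishes an official fit...
c1 = tle_commit(["1 99999U ...", "2 99999 ..."], parent_hash=None,
                author="master")
# ...and someone forks it with their own smoother settings:
c2 = tle_commit(["1 99999U ...", "2 99999 (refit) ..."], parent_hash=c1,
                author="fredy")

# Any change to the content or lineage changes every downstream id,
# so following the "@fredy branch" of fits is just following a chain.
assert c2 != c1
```

Identical content with the same parent always produces the same id, which is also why this structure overlaps with the block-chain-style ideas mentioned below: the chain of hashes *is* the pedigree.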

There have been suggestions on other websites for how block-chain-like technology (which git “kinda” uses, in a way) could be applied to uniquely identifying the data, the processing pathways (transactions w/ data), and subsequent dissemination of results (aka produced new data), in a way that provides full transparent visibility to how it all happened.

Right now, as many have noticed, the processing and production of TLEs behind government or bureaucratic doors doesn’t allow a full understanding of why certain TLEs never worked, or worked rather poorly.
You never get to see into the TLE “sausage factory”, but you naturally suspect problems that might occur within.

  • Was it a clerical error (mistagging due to human-in-the-loop errors)?
  • One based on poor quality, or volume of observations ?
  • or a fundamental flaw in the data pipeline (files getting stepped on) ?

You simply don’t know, and any solution proposed for this effort, with overarching transparency as its goal, should take extra care in exploring what transparency means for all those steps.

Like I said, it’s a bit of a nit-pick detail when you get down to it, but this is a good time to take a high-level goal (transparency) and start treating it as actionable guidelines for the processes that will be fleshed out. As stuff gets implemented, keep asking the right questions:

  • “is this transparent enough ?”,
  • “are we making this step traceable in a data pipeline pedigree ?”,
  • “could we reverse our way back to root cause, or point of origin ?” etc…

It’s just something that I think is often implemented as an afterthought, once the problems start piling up, and that’s always harder to fix.

Kind of like “security”… if you don’t bake it into everything you do up front, you’re going to find it much harder to go back and secure things afterwards… and speaking of which, THAT topic is worth discussing a bit as well.



This is a very good comment, when I was a professional orbital analyst we did have poorer performers who updated an orbit (with a differential correction) and updated the catalog - and we had to go back and fix the TLE later. Satellites would be flagged as “lost” when their epoch got a day or two old. Today the 18th SCS releases their TLEs for events like the release of several small sats and T.S. Kelso and others then go off and try to correct the official satellite catalog.

We also have to worry about various users trying to update objects with incorrect orbits - if a satellite was going to reenter and the originating government was worried about liability, might they file an updated orbit that changed the “international designator” or COSPAR id?

But as we do with repositories - a user could check out a TLE and have their own version, but then have some controls before it was merged back into the master.
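A hedged sketch of what those “controls before merge” might look like, assuming nothing about the eventual system: the function name, fields, and the residual threshold below are all hypothetical, chosen only to illustrate the idea of gating a proposed TLE update against identity changes and obviously poor fits.

```python
# Hypothetical merge-gate for a proposed TLE update: identity fields
# must not change, and the fit must meet a (made-up) quality bar.
def ok_to_merge(current, proposed):
    """Return True if a proposed catalog update may land on master."""
    if proposed["intl_designator"] != current["intl_designator"]:
        return False  # COSPAR id must never silently change
    if proposed["norad_id"] != current["norad_id"]:
        return False  # nor may the catalog number
    if proposed["residual_rms_km"] > 5.0:
        return False  # poor OD fit quality: hold for human review
    return True

current = {"intl_designator": "20001A", "norad_id": 99999,
           "residual_rms_km": 0.5}
good = {"intl_designator": "20001A", "norad_id": 99999,
        "residual_rms_km": 0.7}
bad = {"intl_designator": "20999B", "norad_id": 99999,
       "residual_rms_km": 0.7}

assert ok_to_merge(current, good)
assert not ok_to_merge(current, bad)  # identity change: rejected
```

A check like the international-designator one is exactly what would catch the liability scenario above, where an originator tries to quietly re-label a reentering object.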

