Meteor M N2 decoder for RPi 3B+

Hi,

True, I also updated the satnogs_lrpt_demod.py script to always dump the IQ file, but to my understanding this can also be configured in satnogs_setup.

The sat_id is extracted from the TLE data. I used a TLE argument because that is available as a post-observation script argument out of the box; no changes necessary. Instead of the full TLE you can also use the --sat_id argument to pass the sat ID directly, which is easier when you start the script by hand.
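For reference, the NORAD catalog number is simply the second field of a TLE line. A hypothetical sketch of that extraction (not the script's actual parsing code; the TLE below uses METEOR-M 2-2's ID 44387 with placeholder orbital elements):

```shell
#!/bin/bash
# Hypothetical sketch: pull the NORAD catalog number (sat_id) out of a
# TLE blob. The element values below are placeholders; the real script
# receives the TLE via the {{TLE}} post-observation argument.
TLE='METEOR-M 2-2
1 44387U 19038A   19200.50000000  .00000000  00000-0  00000-0 0  9990
2 44387  98.6000 200.0000 0001000  90.0000 270.0000 14.23000000  1000'

# Line 3 of the blob is TLE line 2; its second field is the sat ID.
sat_id=$(echo "$TLE" | awk 'NR==3 {print $2}')
echo "$sat_id"   # 44387
```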

I have now been able to produce Meteor-M N2-2 images as well, so the complete setup seems to be working for both Meteors. It turns out the devel branch of meteor_demod has some improvements for getting a more reliable lock, one of which is the option to specify a frequency delta (-d): the distance from the center frequency within which the demodulator will look for a carrier. Since the observations are Doppler-compensated and the satellite's frequency is quite stable, this delta can be set quite tight; I have set it to 1000 Hz, which works nicely for me. I have updated the script in my repo with this change.

First working observation. Has a lot of black bars, but hey, it’s something.
https://network.satnogs.org/observations/1301416/#tab-data

@Rico I see you managed to get it working!
I probably forgot to remove the -F argument, which I implemented in a fork of meteor_demod because sometimes the frequency estimate wandered off and the demodulator failed to get a lock. This is now on the devel branch as -c and -d. (I set mine to 1000 as well, but a tighter delta might work better; the signals from the two satellites appear pretty stable at around +200 Hz for me.)

I found that I get the best results with a PLL bandwidth (-b) of around 75 for M N2 (40069) and around 250-350 for M N2-2 (44387). Reducing the RRC filter order (-f) didn't affect the demodulation much for me and speeds the process up a bit on the Pi, but that might depend on the RFI at your location.
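Put together, the per-satellite tuning looks roughly like this sketch. Only the flags named in this thread (-d, -b, -f) are included; input/output arguments and the rest of the meteor_demod command line are left to your own wrapper, and the fallback value is an untested guess:

```shell
#!/bin/bash
# Sketch: pick meteor_demod tuning flags per satellite, using the
# values discussed in this thread (-d 1000, -f 24, -b 75 vs. -b 300).
demod_args() {
    case "$1" in
        40069) echo "-d 1000 -b 75 -f 24" ;;   # METEOR-M N2
        44387) echo "-d 1000 -b 300 -f 24" ;;  # METEOR-M N2-2
        *)     echo "-d 1000 -b 100 -f 24" ;;  # untested fallback
    esac
}

demod_args "44387"   # -d 1000 -b 300 -f 24
```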

I switched to medet for both satellites as well; it's a bit quicker, and meteor_decode sometimes produced images with misaligned channels.

Here’s my current process_meteor.py (ignore the ntfy stuff, it’s simply a push notification whenever a Meteor-M observation succeeds/fails)

Remember that with medet you need to update your script every time they change channels/APIDs if you want all 3 channels. I try to keep https://github.com/benelsen/weather_satellites updated, but usually only after an observation or two has returned blank images.

Hi @benelsen, thanks for your response. OK, so our findings correspond. The max frequency delta of 1 kHz helps a lot. I'm also seeing better results with a loop bandwidth of 300, and lowering the RRC filter order does not seem to matter much, so I'm keeping it at 24:

Images:
BW 100, RRC 64

BW 300, RRC 64

BW 300, RRC 24

Anyone willing to experiment can try my process_meteor.py; see a few posts up for instructions. I will make something to auto-install on an existing SatNOGS setup later.

That already looks a lot better.

btw. the different PLL bandwidth is due to


Maybe the lower bandwidth is necessary for observations that aren't Doppler-corrected?

Could be, I don't know. I read a little about the subject: a higher bandwidth leads to a faster lock, but also a faster loss of lock. In some applications the bandwidth starts high and is lowered once lock is achieved. Just dividing the bandwidth by a magic number for a slightly different modulation mode seems a bit silly to me.

Anyway, the current parameters seem to work fine judging by the image I just received:


I now have a problem where an observation is messed up if it starts while the post-observation script of a previous observation is still running. @benelsen, I see you used nice and ionice, was that in an attempt to remedy this issue? And did that fix it for you?

I try not to schedule Meteor passes too low, so as to reduce the possibility, but judging from the observations that are within 20 minutes of each other it seems to help.
I also only dump the I/Q for Meteor (onto an external SSD), which keeps the following observation from running into disk I/O bottlenecks. CPU isn't the biggest problem, with only one core at 100% during the demod+decode.
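For reference, the nice/ionice wrapping is roughly the following (the path is illustrative, not my exact script; `ionice -c 3` is the Linux "idle" I/O class):

```shell
# Illustrative: run the heavy demod/decode steps at the lowest CPU and
# I/O priority so a concurrently starting observation is not starved.
nice -n 19 ionice -c 3 /path/to/process_meteor.py "$1" "$2" "$3" "$4"
```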

I also only dump Meteor IQ files, to a thumb drive in my case (which will probably break at some point). It is true that the demod/decode takes only one core, but the GNU Radio scripts can use more than one core and may need all four of them, so they may still be affected if one core is already fully occupied. Ah well, it's no problem if you leave some breathing room between observations. :slight_smile:

I figured out what the issue was (see below) and fixed my post observation script.

Bottom line: observations are only cleaned up from tmpfs after the post-observation script finishes, so it is wise to keep post-observation scripts short and to fork/daemonize if you want to do any longer-running processing. My post-observation script now forks when it has Meteor processing to do.
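For anyone wanting to do the same, the forking pattern is roughly this sketch (file names are illustrative, and the sleep stands in for the actual demod/decode):

```shell
#!/bin/bash
# Sketch of a forking post-observation script: return quickly so the
# SatNOGS client can clean up tmpfs and start the next observation,
# while the slow Meteor work continues in a background job.
long_meteor_processing() {
    sleep 1                              # stand-in for demod + decode
    echo "meteor processing done"
}

# Quick foreground work (switching the bias tee off, etc.) goes here.

# Fork: run the slow part in the background with its output sent to a
# log file, so no inherited pipe keeps the client waiting on us.
long_meteor_processing >/tmp/meteor.log 2>&1 &
echo "forked, post-observation script exits immediately"
```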


Ok, no need to update your station for Meteor M2-2 any more, since the sat has broken down. :expressionless:

Hey @Rico, I use a bias-t post-observation script:

/home/pi/rtl_biast/build/src/rtl_biast -b 0

Is there any way I can add your process_meteor.py script to it along the existing script so I can use both?

Thanks!

Hi,

Yes, sure. Be sure that the arguments are passed on to the Python script then.

So as post-observation script you would set:
/path/to/your/script.sh --id {{ID}} --tle {{TLE}}

And to your script you would add the following line:
/path/to/process_meteor.py "$1" "$2" "$3" "$4"

This passes the first four arguments of your post-observation script on to the Python script. Mind the quotes; they are necessary since the TLE contains spaces.
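A toy illustration of why the quotes matter (nothing to do with the actual scripts, just bash word splitting):

```shell
#!/bin/bash
# Toy demo: an argument containing spaces, like a TLE line, must be
# quoted or the shell word-splits it into several arguments.
count_args() { echo "$#"; }

TLE_LINE='1 44387U 19038A   19200.50000000'
count_args "$TLE_LINE"   # quoted: 1 argument
count_args $TLE_LINE     # unquoted: split into 4 arguments
```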

Let me know if you have any troubles.

Thanks for your response! Can I kindly ask you to clarify the part about the observation script? Do I set the following as the post-observation script:

/path/to/your/script.sh --id {{ID}} --tle {{TLE}}

Is that correct? In that case, where do I put the current bias-t section?

/home/pi/rtl_biast/build/src/rtl_biast -b 0

Thank you very much!

Your post-observation script would be /path/to/your/script.sh and its contents would be:

#!/bin/bash 

/home/pi/rtl_biast/build/src/rtl_biast -b 0
/path/to/process_meteor.py "$1" "$2" "$3" "$4"

The post-observation script setting should be as follows:
/path/to/your/script.sh --id {{ID}} --tle {{TLE}}


Perfect, now I understand! I’ll follow up with the results.

Thank you!


I guess knowing the TLE is less important now, given Meteor M2-2 is no longer transmitting LRPT…

An update: it didn't work :slight_smile: For some reason, when I add the script as a post-observation script (or the Meteor script alone), the station stops uploading data and no waterfalls are produced. I initially thought the script had to be made executable, but that didn't help either.

Am I missing something here? What other steps should I take in order to get the script working? I promise I went through the whole thread, but it's a long thread with outdated info in some places, so I would really appreciate a nudge in the right direction.

Thank you!

@ivor: There might be a few reasons why the script does not work in your setup. In my case, I changed the first line of process_meteor.py to get it to work:

#!/usr/bin/env python3

I hope it helps.

Mine already has that first line, and the script seems to run fine when I start it manually. Thanks though!