Gr-satnogs new waterfall testing

The new waterfall plotting mechanism is almost ready! There are still some improvements to be made, but they are related to the display and the .png image creation. You can check it out on my fork.

How it works:
Instead of gnuplot, it now uses Python's matplotlib and generates the .png image directly at the end of the observation.

The C++ block waterfall_heatmap is now responsible for computing the power spectral density and mapping the resulting energy into int8_t values for every frequency bin of the FFT. These values are passed at runtime to the Python block waterfall_plotter, which just appends the received values to a 2-D numpy array. For a 7-minute observation, this takes up to a few dozen MB of memory. At the end of an observation, before the flowgraph terminates, GNU Radio automatically calls the stop() method of each block. The stop() of waterfall_plotter generates the waterfall image from the stored 2-D array and exits.
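The pipeline above can be sketched in a few lines of numpy. This is a hypothetical illustration, not the actual gr-satnogs code: the fft_size, the number of rows, and the dB-to-int8 mapping are assumed values, and the real rendering to .png happens in stop() via matplotlib.

```python
import numpy as np

# Assumed parameters, not taken from the gr-satnogs source
fft_size = 1024
rng = np.random.default_rng(0)

def psd_row(samples, fft_size):
    """One waterfall row: PSD in dB, quantized to int8 (as waterfall_heatmap does in C++)."""
    spectrum = np.fft.fftshift(np.fft.fft(samples, fft_size))
    psd_db = 10.0 * np.log10(np.abs(spectrum) ** 2 / fft_size + 1e-12)
    # Map the dB values into the int8_t range, clipping at the extremes
    return np.clip(psd_db, -128, 127).astype(np.int8)

# The Python block simply accumulates each received row into a 2-D array...
rows = [psd_row(rng.standard_normal(fft_size), fft_size) for _ in range(100)]
waterfall = np.vstack(rows)              # shape: (n_rows, fft_size), dtype int8
print(waterfall.shape, waterfall.dtype)
# ...and at stop() this array would be rendered to a .png with matplotlib.
```

Since each row is one byte per bin, the memory cost is easy to estimate: e.g. 10 rows/s for a 420 s observation at 1024 bins is only ~4.3 MB, which matches the "few dozen MB" figure for larger FFT sizes or row rates.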

What to test:
We have to confirm that this method uses less memory than the gnuplot method during the image creation. I can confirm this for my x86 setup, but I have not tried it on a Raspberry Pi.
If we are ok with the current setup, we can then test higher FFT sizes for better resolution. Moreover, we can test a higher number of rows per second. Eventually, we could set the resolution high enough that CW signals can be decoded visually, which is not possible with the existing gnuplot images.


I can try on a Raspberry Pi 3.
Do I also need to change satnogs-client?
What other system dependencies need to be installed?

Hey @surligas, I am trying this out, but when running it by hand no waterfall file is generated at all…

No change to the client, but you’ll need to install libnova-dev libpng-dev libogg-dev libvorbis-dev swig

mkdir build; cd build; cmake -DCMAKE_INSTALL_PREFIX=/usr .. && make -j4 && sudo make install && sudo ldconfig


If you are using the image where it is installed from the package, you may want to dpkg -r gr-satnogs first…

Given my experience though, I’d wait for @surligas to come back with an answer about the missing waterfalls.

@cshields this is strange,

I run with --waterfall-file-path=/home/surligas/test.png and then use Ctrl+C to terminate the flowgraph. After a while, the waterfall is ready.

Does it exit with an error code?

Why png? It takes more disk space than .jpg. I just compared two waterfalls from a recent observation:

  • waterfall_42411_2017-12-08T11-59-47.png - 1940 KB
  • waterfall_42411_2017-12-08T11-59-47.jpg - 546 KB (~3.5x less)

We don’t use the alpha channel, so it’s pretty safe to store them as .jpg.

PNG is lossless; we can afford to spend a few extra bytes rather than losing image information.

How come? You are transmitting it via HTTP, which is based on TCP, which has guaranteed delivery…

It has nothing to do with data loss on the network. JPEG uses lossy compression: it discards information from the original image in exchange for fewer bytes.

It’s like audio compression. A song in a lossless format may be 60-100 MB, whereas an mp3 is about 3-4 MB. To achieve this compression, mp3 cuts off some frequencies. You cannot reconstruct the original audio.
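The irreversibility can be shown with a toy example. This is only an illustration of lossy encoding in general, not of how JPEG or mp3 actually work: quantizing samples to 8 bits throws away information that no decoder can recover.

```python
import numpy as np

# A synthetic "original" signal
t = np.linspace(0.0, 1.0, 1000)
original = 0.7 * np.sin(2 * np.pi * 5 * t)

# Lossy step: quantize to one byte per sample (illustrative, not real JPEG/mp3)
quantized = np.round(original * 127).astype(np.int8)

# Best possible reconstruction from the quantized data
restored = quantized / 127.0

max_err = np.max(np.abs(original - restored))
print(max_err)  # nonzero: the original cannot be reconstructed exactly
```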

Right. But why do you need a lossless waterfall? I thought it was used for debugging purposes only.

Why not? 2 MB instead of 0.5 MB is not that much. Also, .png is much easier for possible future visualization enhancements that the network folks may want to apply.

Well for us on limited ADSL plans, that means more quota usage on uploads :frowning:

There are still quota-limited ADSL plans!!! :scream: I see your concern, but the audio file, for example, requires much more bandwidth. Reducing the observation time by a minute may give you significantly smaller upload files.

Anyway, this thread is not the proper one for such a discussion.

Given an overhead of 1940 KB - 546 KB = 1394 KB per observation, the total overhead for 42485 observations will be ~56 GB.
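The arithmetic checks out (sizes taken from the two files compared earlier in the thread):

```python
# Per-observation overhead of PNG over JPEG, and the total across all observations
per_obs_kb = 1940 - 546                       # 1394 KB extra per observation
total_gb = 42485 * per_obs_kb / (1024 * 1024)  # KB -> GB
print(per_obs_kb, round(total_gb, 1))          # → 1394 56.5
```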

I was trying to find any design docs or discussions, but haven’t found any :frowning:. I have more questions, like “why create a waterfall at all?”. Since all observations have an audio file, it would be fairly easy to create waterfalls at any time. Moreover, they could be created on demand on the server, rather than on a slow Raspberry Pi.

The difference between the generated waterfall and the one that you get from the audio is that the former covers a bigger bandwidth.

The goal behind this is to check whether the frequency of a transmitter has drifted so much that it falls outside the audio bandwidth, and to fix it if this behaviour repeats.

You can check that by going to an observation and comparing the waterfall with the spectrogram of the audio in the audio tab.

Totally agree on this. In general, the audio may be generated after some corrections, e.g. filtering, Costas loop, etc., whereas the spectrum waterfall is generated right after the Doppler compensation.

Server-side processing should be avoided at all costs, in my opinion. There is only one server (at least for now :P). On the other hand, distributing the workload, even across low-end processing units, is extremely efficient. It may take longer, that’s true, but these units are idle most of the time. Why not use them? One could argue that a station should be ready for the next observation. That’s also true, but for now you can just prioritize observations. In the future, if observations become dense enough, we can make the client postpone the processing and upload until it becomes idle.

But again, this thread was about testing the new waterfall. You can create a new one about the data size and your concerns.

A post was split to a new topic: Security of uploaded images

I put some comments on the branch. Not sure if they got through, as I can only comment on commits, not on the results.

Any idea when this might be ready?