Strf on Adalm Pluto, much too much data in bin files

Hi, I am wondering if anyone else here is running strf with Adalm Pluto.

My problem is that with the settings as advised here (GitHub - cbassa/strf: Radio Frequency Satellite Tracking), where -t is not set and is reported as 1 when strf starts, I get way too much data in the 60 bin files. If I plot them with -s 0 and -l 3600 I get about 4 hours of signals on one screen, which does not tab nicely and needs a lot of zoom to identify signals.

In nice cooperation with PE0SAT I have now set -t to 0.25, giving me 1 hour of signal in 60 bin files, which is much more pleasant to tab and backspace through in one window of data compared to 4 hours in the same window.

Can anyone explain why PE0SAT, running a HackRF with -t 1, gets one hour in 60 bin files, while I, with an Adalm Pluto and -t 1, get 4 hours in 60 bin files? Both are running strf at 18 MSPS.

Any help appreciated,

Some details on how I start the STRF observation with my HackRF

The complete script can be found at this location.

The bash function:

start_hackrf () {
# Start an observation using a HackRF: stream raw samples from
# hackrf_transfer into rffft for the duration of the observation.
timeout --preserve-status "${DURATION}" \
  "${SDR}" -a "${AMP}" -l "${LNAGAIN}" -x "${VGAGAIN}" -s "${RATE}" \
  -b "${BW}" -f "${FREQ}"e6 -p "${BIAS}" -S "${BSIZE}" -r - 2>/dev/null | \
  "${RFFFT}" -f "${FREQ}"e6 -s "${RATE}" -c "${RFCS}" -t "${RFT}" -F char -q
}

The bash variables and the case definition:

HACKRFSDR=$(command -v hackrf_transfer)

    echo "${LINE}"
    echo "# Using SDR device ${SDR}"

The cronjob:

00 21 12 12 * ${HOME}/bin/ 3 2240 2880m ${SATOBS} 100 1

Value $1 is the case used, $2 is the center frequency, $3 is the duration of the observation, $4 is the location where the bin files are stored, $5 is the rffft channel size and $6 is the rffft integration time.
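A minimal sketch of how those positional parameters could map onto the variables used in the start_hackrf function above (SDRCASE and OBSDIR are placeholder names of my own, and /tmp/obs stands in for ${SATOBS}; the rest follow the function):

```shell
#!/bin/bash
# Sketch only: how the cronjob's positional arguments might map onto
# the script's variables. Defaults mirror the cronjob line above.
SDRCASE="${1:-3}"       # case used to pick the SDR
FREQ="${2:-2240}"       # centre frequency in MHz ("e6" is appended later)
DURATION="${3:-2880m}"  # observation length, passed to timeout(1)
OBSDIR="${4:-/tmp/obs}" # ${SATOBS}: where the bin files are stored
RFCS="${5:-100}"        # rffft -c, channel size
RFT="${6:-1}"           # rffft -t, integration time in seconds
echo "case=${SDRCASE} freq=${FREQ}e6 dur=${DURATION} dir=${OBSDIR} c=${RFCS} t=${RFT}"
```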

The cronjob will run rffft with -c 100 and -t 1, and that gives me an hour's overview when I use -l 3600 with rfplot.
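The arithmetic behind that, assuming one spectrum per rffft integration interval, so that an rfplot window of -l spectra spans l × t seconds:

```shell
# Sanity check (assumption: one spectrum per rffft -t interval):
t_int=1    # rffft -t, seconds per spectrum
l=3600     # rfplot -l, spectra per window
echo "$(( l * t_int )) seconds"   # 3600 seconds = one hour per window
```

By the same arithmetic, data that effectively spans 4 seconds per spectrum would show 4 hours in the same window.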

Hi Ben,

Can you get the Adalm Pluto to dump the voltages to a file? That way we can check what the bit depth, sample rate and packing are before feeding it into STRF.



Sure! I guess it doesn't need to be 18 MSPS wide for the test, as long as I tell you the sample rate I set when dumping to the file?


Which gives 1 minute of output (1 gigabyte!), run as timeout 60s ./ with the following settings:

iio_attr -u ip: -c ad9361-phy RX_LO frequency 2245000000
iio_attr -u ip: -i -c ad9361-phy voltage0 rf_port_select A_BALANCED
iio_attr -u ip: -c ad9361-phy voltage0 rf_bandwidth 2000000
iio_attr -u ip: -c ad9361-phy voltage0 sampling_frequency 2000000
iio_attr -u ip: -c ad9361-phy voltage0 gain_control_mode manual #hybrid
iio_attr -u ip: -i -c ad9361-phy voltage0 hardwaregain 71

read -p "All set. Press enter to continue"

iio_readdev -u ip: -b 100000 cf-ad9361-lpc >2msps2245mhz

So, assuming I did everything right, that is 2 MSPS output for one minute.
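One quick cross-check on that dump, assuming the Pluto delivers 16-bit I and 16-bit Q per sample (the AD9361's 12-bit samples packed into int16, which is an assumption, not something the commands above guarantee):

```shell
# Expected raw dump size under the int16 I/Q assumption:
rate=2000000   # sampling_frequency set above
secs=60        # timeout 60s
bytes=$(( rate * secs * 2 * 2 ))   # 2 components * 2 bytes each
echo "${bytes} bytes"   # 480000000 bytes, roughly 480 MB
```

A file closer to 1 GB than 480 MB would suggest the packing or the effective rate differs from that assumption, which is exactly the kind of mismatch that could make strf mis-time the data.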


As far as I understand now, the -b (buffer size) is not some kind of buffer to prevent data hiccups, but the actual capture buffer size. Not sure what that means for my problem however :wink:
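If that reading is right, -b just sets how many samples are fetched per refill and the stream stays continuous. A toy illustration of the same idea with dd (chunked reads of one continuous stream; nothing to do with iio_readdev itself):

```shell
# Chunk (buffer) size changes how often we refill, not how much data
# flows: 10 bytes in, read through 2-byte buffers, 10 bytes out.
printf 'abcdefghij' | dd bs=2 2>/dev/null | wc -c
```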