Improve EOO TX/RX reliability (#855)

* Create FIFOs with correct sizes to reduce EOO time.

* Try using configured fifo size for all FIFOs.

* Use correct variable names when defining FIFO sizes.

* Update wxWidgets to 3.2.7 to see if that fixes the build failure.

* Use Ubuntu 24.04 for Windows build to match Linux.

* Patch libsamplerate to avoid GH action failure.

* Need to remove 3.1 from samplerate-cmake.patch

* Fix issue causing cmake to hang if being rerun.

* Temporarily use wxWidgets master as latest Xcode can't build 3.2.7.

* Add support for rig control during RADE EOO ctests.

* Try adding a bit of time to see if it'll decode.

* Fix GH action failure.

* Fix Windows build failure.

* Use same delay code as other existing similar logic.

* Add 20ms delay in mock rigctld to better match behavior with actual hardware.

* Fix Windows build issue for real.

* Fix samplerate patch issue.

* Include wxWidgets manifest code since 3.3 will soon require it.

* Use 3.2 branch as we can build on macOS now in that branch.

* Add IAudioDevice function to allow retrieval of device latency.

* Try shrinking the number of samples FreeDVInterface returns for RADE.

* Add missed get() call in PortAudio logic.

* We really shouldn't need to add txIn latency.

* We need to see the logs from when TX happens during ctest.

* Explicitly disable power savings for audio (macOS).

* Allow partial reads from TX output FIFO.

* First pass at calculating latency for WASAPI.

* Try IO frame size of 1024 to improve pass rate of GH Actions.

* Initial implementation of RADE reporting test on Windows.

* Remove unneeded flag from previously added script.

* Fix various issues with PS script.

* Revert "Try IO frame size of 1024 to improve pass rate of GH Actions."

This reverts commit 1161d9505d.

* Use FDV output, not mock rigctl output, for comparison.

* Use GetStreamLatency() instead.

* Add logging to help determine why WASAPI latency is incorrect.

* Need GetDevicePeriod as well for fully accurate latency measurements.

* Buffer size is the minimum bound on latency. Or at least it seems like it would be, anyway.

* Guarantee that we have universal macOS binary even if tests fail.

* Also take into account PTT response time (i.e. for SDRs).

* Only need to add half of the rig response time for good results.

* Forgot implementation of getRigResponseTimeMicroseconds() for OmniRig.

* Prevent negative zero SNRs from appearing in GUI.

* Try smallest buffer size possible for macOS audio.

* Fix macOS compiler error.

* (Windows) Use event based triggering to provide audio to/from FreeDV.

* Divide by number of channels to get actual latency on macOS.

* Increase minimum frame size to 128 on macOS.

* Oops, types need to be the same.

* Fix deadlock in Windows audio from previous commits.

* Try 256 buffer size on macOS.

* Use minimum of 40ms latency on macOS and Windows.

* No need for the samplerate patch anymore.

* Fix comments.
ms-macos-high-sample-rate
Mooneer Salem 2025-04-11 19:33:33 -07:00 committed by GitHub
parent 149d37230b
commit 7cdb9e8a7d
GPG Key ID: B5690EEEBB952194
37 changed files with 1297 additions and 97 deletions


@ -17,6 +17,7 @@ jobs:
os: [macos-13, macos-latest] # x86_64, ARM64
runs-on: ${{ matrix.os }}
needs: dist
steps:
- uses: actions/checkout@v4
@ -83,7 +84,6 @@ jobs:
# Only build and publish universal binary after making sure code works properly on both
# x86 and ARM. No point in doing so if there's a failure on either.
dist:
needs: test
runs-on: macos-latest
steps:


@ -15,7 +15,7 @@ jobs:
# well on Windows or Mac. You can convert this to a matrix build if you need
# cross-platform coverage.
# See: https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
@ -109,12 +109,20 @@ jobs:
run: |
.\FreeDV.exe /S /D=${{github.workspace}}\FreeDV-Install-Location | Out-Null
- name: Copy test script to install folder
- name: Copy test scripts to install folder
shell: pwsh
run: |
Copy-Item -Path ${{github.workspace}}/test/TestFreeDVFullDuplex.ps1 -Destination ${{github.workspace}}\FreeDV-Install-Location\bin
Copy-Item -Path ${{github.workspace}}/test/freedv-ctest-fullduplex.conf.tmpl -Destination ${{github.workspace}}\FreeDV-Install-Location\bin
Copy-Item -Path ${{github.workspace}}/test/TestFreeDVReporting.ps1 -Destination ${{github.workspace}}\FreeDV-Install-Location\bin
Copy-Item -Path ${{github.workspace}}/test/freedv-ctest-reporting.conf.tmpl -Destination ${{github.workspace}}\FreeDV-Install-Location\bin
Copy-Item -Path ${{github.workspace}}/test/hamlibserver.py -Destination ${{github.workspace}}\FreeDV-Install-Location\bin
- name: Install SoX
shell: pwsh
run: |
choco install sox.portable
- name: Install VB-Cable ("Radio" sound device)
uses: LABSN/sound-ci-helpers@v1
@ -155,6 +163,13 @@ jobs:
.\TestFreeDVFullDuplex.ps1 -RadioToComputerDevice "${{env.RADIO_TO_COMPUTER_DEVICE}}" -ComputerToRadioDevice "${{env.COMPUTER_TO_RADIO_DEVICE}}" -MicrophoneToComputerDevice "${{env.MICROPHONE_TO_COMPUTER_DEVICE}}" -ComputerToSpeakerDevice "${{env.COMPUTER_TO_SPEAKER_DEVICE}}" -ModeToTest RADEV1 -NumberOfRuns 1
timeout-minutes: 10
- name: Test RADE Reporting
shell: pwsh
working-directory: ${{github.workspace}}\FreeDV-Install-Location\bin
run: |
.\TestFreeDVReporting.ps1 -RadioToComputerDevice "${{env.RADIO_TO_COMPUTER_DEVICE}}" -ComputerToRadioDevice "${{env.COMPUTER_TO_RADIO_DEVICE}}" -MicrophoneToComputerDevice "${{env.MICROPHONE_TO_COMPUTER_DEVICE}}" -ComputerToSpeakerDevice "${{env.COMPUTER_TO_SPEAKER_DEVICE}}"
timeout-minutes: 10
- name: Test 700D
shell: pwsh
working-directory: ${{github.workspace}}\FreeDV-Install-Location\bin


@ -263,14 +263,13 @@ if(MINGW)
set(USE_INTERNAL_CODEC2 TRUE CACHE BOOL "Perform internal builds of codec2")
# Setup HOST variables.
include(cmake/MinGW.cmake)
# This sets up the exe icon for windows under mingw.
set(RES_FILES "")
# This sets up the exe icon for windows under mingw.
set(RES_FILES "${CMAKE_BINARY_DIR}/freedv.rc")
set(CMAKE_RC_COMPILER_INIT windres)
enable_language(RC)
set(CMAKE_RC_COMPILE_OBJECT
"<CMAKE_RC_COMPILER> <FLAGS> -O coff <DEFINES> -i <SOURCE> -o <OBJECT>")
"<CMAKE_RC_COMPILER> --include-dir ${CMAKE_BINARY_DIR}/_deps/wxwidgets-src/include <FLAGS> -O coff <DEFINES> -i <SOURCE> -o <OBJECT>")
include(InstallRequiredSystemLibraries)
endif(MINGW)


@ -8,7 +8,7 @@ FetchContent_Declare(
GIT_REPOSITORY https://github.com/libsndfile/libsamplerate.git
GIT_SHALLOW TRUE
GIT_PROGRESS TRUE
GIT_TAG 0.2.2
GIT_TAG master
)
FetchContent_GetProperties(samplerate)


@ -1,4 +1,4 @@
set(WXWIDGETS_VERSION "3.2.6")
set(WXWIDGETS_VERSION "3.2.7")
# Ensure that the wxWidgets library is statically built.
set(wxBUILD_SHARED OFF CACHE BOOL "Build wx libraries as shared libs")
@ -25,7 +25,8 @@ FetchContent_Declare(
GIT_REPOSITORY https://github.com/wxWidgets/wxWidgets.git
GIT_SHALLOW TRUE
GIT_PROGRESS TRUE
GIT_TAG v${WXWIDGETS_VERSION}
#GIT_TAG v${WXWIDGETS_VERSION}
GIT_TAG 3.2
)
FetchContent_GetProperties(wxWidgets)


@ -96,6 +96,8 @@ set( _windlls
bcrypt.dll
IPHLPAPI.DLL
AVRT.dll
gdiplus.dll
MSIMG32.dll
# The below are additional DLLs required when compiled
# using the LLVM version of MinGW.


@ -1,3 +1,7 @@
#define wxUSE_RC_MANIFEST 1
#define wxUSE_DPI_AWARE_MANIFEST 2
#include "wx/msw/wx.rc"
1 VERSIONINFO
FILEVERSION @VERSION_AS_RC@
PRODUCTVERSION @VERSION_AS_RC@


@ -44,6 +44,8 @@ public:
virtual void stop() = 0;
virtual bool isRunning() = 0;
virtual int getLatencyInMicroseconds() = 0;
// Sets user friendly description of device. Not used by all engines.
void setDescription(std::string desc);


@ -39,6 +39,8 @@ public:
virtual void stop() override;
virtual bool isRunning() override;
virtual int getLatencyInMicroseconds() override;
protected:
friend class MacAudioEngine;


@ -26,6 +26,44 @@
#include <future>
#import <AVFoundation/AVFoundation.h>
static OSStatus GetIOBufferFrameSizeRange(AudioObjectID inDeviceID,
UInt32* outMinimum,
UInt32* outMaximum)
{
AudioObjectPropertyAddress theAddress = { kAudioDevicePropertyBufferFrameSizeRange,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster };
AudioValueRange theRange = { 0, 0 };
UInt32 theDataSize = sizeof(AudioValueRange);
OSStatus theError = AudioObjectGetPropertyData(inDeviceID,
&theAddress,
0,
NULL,
&theDataSize,
&theRange);
if(theError == 0)
{
*outMinimum = theRange.mMinimum;
*outMaximum = theRange.mMaximum;
}
return theError;
}
static OSStatus SetCurrentIOBufferFrameSize(AudioObjectID inDeviceID,
UInt32 inIOBufferFrameSize)
{
AudioObjectPropertyAddress theAddress = { kAudioDevicePropertyBufferFrameSize,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster };
return AudioObjectSetPropertyData(inDeviceID,
&theAddress,
0,
NULL,
sizeof(UInt32), &inIOBufferFrameSize);
}
MacAudioDevice::MacAudioDevice(int coreAudioId, IAudioEngine::AudioDirection direction, int numChannels, int sampleRate)
: coreAudioId_(coreAudioId)
, direction_(direction)
@ -89,6 +127,22 @@ void MacAudioDevice::start()
return;
}
// Attempt to set the IO frame size to an optimal value. This hopefully
// reduces dropouts on marginal hardware.
UInt32 minFrameSize = 0;
UInt32 maxFrameSize = 0;
UInt32 desiredFrameSize = 512;
GetIOBufferFrameSizeRange(coreAudioId_, &minFrameSize, &maxFrameSize);
if (minFrameSize != 0 && maxFrameSize != 0)
{
log_info("Frame sizes of %d to %d are supported for audio device ID %d", minFrameSize, maxFrameSize, coreAudioId_);
desiredFrameSize = std::min(maxFrameSize, (UInt32)2048); // TBD: investigate why we need a significantly higher value than the default.
if (SetCurrentIOBufferFrameSize(coreAudioId_, desiredFrameSize) != noErr)
{
log_warn("Could not set IO frame size to %d for audio device ID %d", desiredFrameSize, coreAudioId_);
}
}
// Initialize audio engine.
AVAudioEngine* engine = [[AVAudioEngine alloc] init];
@ -114,6 +168,24 @@ void MacAudioDevice::start()
return;
}
// If we were able to set the IO frame size above, also set kAudioUnitProperty_MaximumFramesPerSlice
// More info: https://developer.apple.com/library/archive/technotes/tn2321/_index.html
if (desiredFrameSize > 512)
{
error = AudioUnitSetProperty(
audioUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&desiredFrameSize,
sizeof(desiredFrameSize));
if (error != noErr)
{
log_warn("Could not set max frames/slice to %d for audio device ID %d", desiredFrameSize, coreAudioId_);
SetCurrentIOBufferFrameSize(coreAudioId_, 512);
}
}
// Need to also get a mixer node so that the objects get
// created in the right order.
AVAudioMixerNode* mixer =
@ -260,4 +332,96 @@ void MacAudioDevice::stop()
bool MacAudioDevice::isRunning()
{
return engine_ != nil;
}
}
int MacAudioDevice::getLatencyInMicroseconds()
{
std::shared_ptr<std::promise<int>> prom = std::make_shared<std::promise<int> >();
auto fut = prom->get_future();
enqueue_([&, prom]() {
// Total latency is based on the formula:
// kAudioDevicePropertyLatency + kAudioDevicePropertySafetyOffset +
// kAudioDevicePropertyBufferFrameSize + kAudioStreamPropertyLatency.
// This is in terms of the number of samples. Divide by sample rate and number of channels to get number of seconds.
// More info:
// https://stackoverflow.com/questions/65600996/avaudioengine-reconcile-sync-input-output-timestamps-on-macos-ios
// https://forum.juce.com/t/macos-round-trip-latency/45278
auto scope =
(direction_ == IAudioEngine::AUDIO_ENGINE_IN) ?
kAudioDevicePropertyScopeInput :
kAudioDevicePropertyScopeOutput;
// Get audio device latency
AudioObjectPropertyAddress propertyAddress = {
kAudioDevicePropertyLatency,
scope,
kAudioObjectPropertyElementMaster};
UInt32 deviceLatencyFrames = 0;
UInt32 size = sizeof(deviceLatencyFrames);
OSStatus result = AudioObjectGetPropertyData(
coreAudioId_,
&propertyAddress,
0,
nullptr,
&size,
&deviceLatencyFrames); // assume 0 if we can't retrieve for some reason
UInt32 deviceSafetyOffset = 0;
propertyAddress.mSelector = kAudioDevicePropertySafetyOffset;
result = AudioObjectGetPropertyData(
coreAudioId_,
&propertyAddress,
0,
nullptr,
&size,
&deviceSafetyOffset);
UInt32 bufferFrameSize = 0;
propertyAddress.mSelector = kAudioDevicePropertyBufferFrameSize;
result = AudioObjectGetPropertyData(
coreAudioId_,
&propertyAddress,
0,
nullptr,
&size,
&bufferFrameSize);
propertyAddress.mSelector = kAudioDevicePropertyStreams;
size = 0;
result = AudioObjectGetPropertyDataSize(
coreAudioId_,
&propertyAddress,
0,
nullptr,
&size);
UInt32 streamLatency = 0;
if (result == noErr)
{
AudioStreamID streams[size / sizeof(AudioStreamID)];
result = AudioObjectGetPropertyData(
coreAudioId_,
&propertyAddress,
0,
nullptr,
&size,
&streams);
if (result == noErr)
{
propertyAddress.mSelector = kAudioStreamPropertyLatency;
size = sizeof(streamLatency);
result = AudioObjectGetPropertyData(
streams[0],
&propertyAddress,
0,
nullptr,
&size,
&streamLatency);
}
}
auto ioLatency = streamLatency + deviceLatencyFrames + deviceSafetyOffset;
auto frameSize = bufferFrameSize;
prom->set_value(1000000 * (ioLatency + frameSize) / sampleRate_);
});
return fut.get();
}


@ -167,6 +167,18 @@ void PortAudioDevice::stop()
}
}
int PortAudioDevice::getLatencyInMicroseconds()
{
int latency = 0;
if (deviceStream_ != nullptr)
{
auto streamInfo = portAudioLibrary_->GetStreamInfo(deviceStream_).get();
latency = 1000000 * (direction_ == IAudioEngine::AUDIO_ENGINE_IN ? streamInfo->inputLatency : streamInfo->outputLatency);
}
return latency;
}
int PortAudioDevice::OnPortAudioStreamCallback_(const void *input, void *output, unsigned long frameCount, const PaStreamCallbackTimeInfo *timeInfo, PaStreamCallbackFlags statusFlags, void *userData)
{
PortAudioDevice* thisObj = static_cast<PortAudioDevice*>(userData);


@ -41,6 +41,8 @@ public:
virtual bool isRunning() override;
virtual int getLatencyInMicroseconds() override;
protected:
// PortAudioDevice cannot be created directly, only via PortAudioEngine.
friend class PortAudioEngine;


@ -172,4 +172,14 @@ std::future<PaError> PortAudioInterface::CloseStream(PaStream *stream)
prom->set_value(Pa_CloseStream(stream));
});
return fut;
}
std::future<const PaStreamInfo*> PortAudioInterface::GetStreamInfo(PaStream* stream)
{
std::shared_ptr<std::promise<const PaStreamInfo*> > prom = std::make_shared<std::promise<const PaStreamInfo*> >();
auto fut = prom->get_future();
enqueue_([=]() {
prom->set_value(Pa_GetStreamInfo(stream));
});
return fut;
}


@ -51,6 +51,8 @@ public:
std::future<PaError> StartStream(PaStream *stream);
std::future<PaError> StopStream(PaStream *stream);
std::future<PaError> CloseStream(PaStream *stream);
std::future<const PaStreamInfo*> GetStreamInfo(PaStream* stream);
};
#endif // PORT_AUDIO_INTERFACE_H


@ -310,6 +310,17 @@ void PulseAudioDevice::stop()
}
}
int PulseAudioDevice::getLatencyInMicroseconds()
{
pa_usec_t latency = 0;
if (stream_ != nullptr)
{
int neg = 0;
pa_stream_get_latency(stream_, &latency, &neg); // ignore error and assume 0
}
return (int)latency;
}
void PulseAudioDevice::StreamReadCallback_(pa_stream *s, size_t length, void *userdata)
{
const void* data = nullptr;


@ -44,6 +44,8 @@ public:
virtual bool isRunning() override;
virtual int getLatencyInMicroseconds() override;
protected:
// PulseAudioDevice cannot be created directly, only via PulseAudioEngine.
friend class PulseAudioEngine;


@ -44,6 +44,9 @@ WASAPIAudioDevice::WASAPIAudioDevice(IAudioClient* client, IAudioEngine::AudioDi
, bufferFrameCount_(0)
, initialized_(false)
, lowLatencyTask_(nullptr)
, latencyFrames_(0)
, renderCaptureEvent_(nullptr)
, isRenderCaptureRunning_(false)
{
// empty
}
@ -99,7 +102,7 @@ void WASAPIAudioDevice::start()
// Initialize the audio client with the above format
HRESULT hr = client_->Initialize(
AUDCLNT_SHAREMODE_SHARED,
0,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
BLOCK_TIME_NS / NS_PER_REFTIME, // REFERENCE_TIME is in 100ns units
0,
&streamFormat,
@ -119,8 +122,39 @@ void WASAPIAudioDevice::start()
initialized_ = true;
}
// Create render/capture event
renderCaptureEvent_ = CreateEvent(nullptr, false, false, nullptr);
if (renderCaptureEvent_ == nullptr)
{
std::stringstream ss;
ss << "Could not create event (hr = " << GetLastError() << ")";
log_error(ss.str().c_str());
if (onAudioErrorFunction)
{
onAudioErrorFunction(*this, ss.str(), onAudioErrorState);
}
prom->set_value();
return;
}
// Assign render/capture event
HRESULT hr = client_->SetEventHandle(renderCaptureEvent_);
if (FAILED(hr))
{
std::stringstream ss;
ss << "Could not assign event handle (hr = " << hr << ")";
log_error(ss.str().c_str());
if (onAudioErrorFunction)
{
onAudioErrorFunction(*this, ss.str(), onAudioErrorState);
}
CloseHandle(renderCaptureEvent_);
prom->set_value();
return;
}
// Get actual allocated buffer size
HRESULT hr = client_->GetBufferSize(&bufferFrameCount_);
hr = client_->GetBufferSize(&bufferFrameCount_);
if (FAILED(hr))
{
std::stringstream ss;
@ -133,6 +167,23 @@ void WASAPIAudioDevice::start()
prom->set_value();
return;
}
log_info("Allocated %d frames for audio buffers", bufferFrameCount_);
// Get latency
latencyFrames_ = bufferFrameCount_;
REFERENCE_TIME latency = 0;
hr = client_->GetStreamLatency(&latency);
if (FAILED(hr))
{
std::stringstream ss;
ss << "Could not get latency (hr = " << hr << ")";
log_warn(ss.str().c_str());
}
else
{
latencyFrames_ += sampleRate_ * ((double)(NS_PER_REFTIME * latency) / 1e9);
}
// Get capture/render client
if (direction_ == IAudioEngine::AUDIO_ENGINE_IN)
@ -201,26 +252,6 @@ void WASAPIAudioDevice::start()
prom->set_value();
return;
}
// Queue render handler
enqueue_([&]() {
renderAudio_();
});
}
else
{
// Queue capture handler
enqueue_([&]() {
captureAudio_();
});
}
// Temporarily raise priority of task
DWORD taskIndex = 0;
lowLatencyTask_ = AvSetMmThreadCharacteristics(TEXT("Pro Audio"), &taskIndex);
if (lowLatencyTask_ == nullptr)
{
log_warn("Could not increase thread priority");
}
// Start render/capture
@ -244,17 +275,53 @@ void WASAPIAudioDevice::start()
captureClient_->Release();
captureClient_ = nullptr;
}
}
// Start render/capture thread.
isRenderCaptureRunning_ = true;
renderCaptureThread_ = std::thread([&]() {
log_info("Starting render/capture thread");
HRESULT res = CoInitializeEx(nullptr, COINIT_MULTITHREADED | COINIT_DISABLE_OLE1DDE);
if (FAILED(res))
{
log_warn("Could not initialize COM (res = %d)", res);
}
// Temporarily raise priority of task
DWORD taskIndex = 0;
lowLatencyTask_ = AvSetMmThreadCharacteristics(TEXT("Pro Audio"), &taskIndex);
if (lowLatencyTask_ == nullptr)
{
log_warn("Could not increase thread priority");
}
while (isRenderCaptureRunning_ && WaitForSingleObject(renderCaptureEvent_, 100) == WAIT_OBJECT_0)
{
if (direction_ == IAudioEngine::AUDIO_ENGINE_OUT)
{
renderAudio_();
}
else
{
captureAudio_();
}
}
log_info("Exiting render/capture thread");
if (lowLatencyTask_ != nullptr)
{
AvRevertMmThreadCharacteristics(lowLatencyTask_);
lowLatencyTask_ = nullptr;
}
}
lastRenderCaptureTime_ = std::chrono::steady_clock::now();
CoUninitialize();
});
prom->set_value();
});
fut.wait();
}
@ -265,6 +332,12 @@ void WASAPIAudioDevice::stop()
auto prom = std::make_shared<std::promise<void> >();
auto fut = prom->get_future();
enqueue_([&]() {
isRenderCaptureRunning_ = false;
if (renderCaptureThread_.joinable())
{
renderCaptureThread_.join();
}
if (renderClient_ != nullptr || captureClient_ != nullptr)
{
HRESULT hr = client_->Stop();
@ -278,11 +351,6 @@ void WASAPIAudioDevice::stop()
onAudioErrorFunction(*this, ss.str(), onAudioErrorState);
}
}
if (lowLatencyTask_ != nullptr)
{
AvRevertMmThreadCharacteristics(lowLatencyTask_);
}
}
if (renderClient_ != nullptr)
@ -295,7 +363,7 @@ void WASAPIAudioDevice::stop()
captureClient_->Release();
captureClient_ = nullptr;
}
prom->set_value();
});
fut.wait();
@ -306,6 +374,13 @@ bool WASAPIAudioDevice::isRunning()
return (renderClient_ != nullptr) || (captureClient_ != nullptr);
}
int WASAPIAudioDevice::getLatencyInMicroseconds()
{
// Note: latencyFrames_ isn't expected to change, so we don't need to
// wrap this call in an enqueue_() like with the other public methods.
return 1000000 * latencyFrames_ / sampleRate_;
}
void WASAPIAudioDevice::renderAudio_()
{
// If client is no longer available, abort
@ -314,10 +389,6 @@ void WASAPIAudioDevice::renderAudio_()
return;
}
// Sleep 1/2 of the buffer duration
int sleepDurationInMsec = 1000 * (double)bufferFrameCount_ / sampleRate_ / 2;
std::this_thread::sleep_until(lastRenderCaptureTime_ + std::chrono::milliseconds(sleepDurationInMsec));
// Get available buffer space
UINT32 padding = 0;
UINT32 framesAvailable = 0;
@ -330,7 +401,7 @@ void WASAPIAudioDevice::renderAudio_()
std::stringstream ss;
ss << "Could not get current padding (hr = " << hr << ")";
log_error(ss.str().c_str());
goto render_again;
return;
}
framesAvailable = bufferFrameCount_ - padding;
@ -342,7 +413,7 @@ void WASAPIAudioDevice::renderAudio_()
std::stringstream ss;
ss << "Could not get render buffer (hr = " << hr << ")";
log_error(ss.str().c_str());
goto render_again;
return;
}
// Grab audio data from higher level code
@ -361,15 +432,8 @@ void WASAPIAudioDevice::renderAudio_()
std::stringstream ss;
ss << "Could not release render buffer (hr = " << hr << ")";
log_error(ss.str().c_str());
goto render_again;
return;
}
render_again:
enqueue_([&]() {
renderAudio_();
});
lastRenderCaptureTime_ = std::chrono::steady_clock::now();
}
void WASAPIAudioDevice::captureAudio_()
@ -380,10 +444,6 @@ void WASAPIAudioDevice::captureAudio_()
return;
}
// Sleep 1/2 of the buffer duration
int sleepDurationInMsec = 1000 * (double)bufferFrameCount_ / sampleRate_ / 2;
std::this_thread::sleep_until(lastRenderCaptureTime_ + std::chrono::milliseconds(sleepDurationInMsec));
// Get packet length
UINT32 packetLength = 0;
HRESULT hr = captureClient_->GetNextPacketSize(&packetLength);
@ -394,7 +454,7 @@ void WASAPIAudioDevice::captureAudio_()
std::stringstream ss;
ss << "Could not get packet length (hr = " << hr << ")";
log_error(ss.str().c_str());
goto capture_again;
return;
}
while(packetLength != 0)
@ -416,7 +476,7 @@ void WASAPIAudioDevice::captureAudio_()
std::stringstream ss;
ss << "Could not get capture buffer (hr = " << hr << ")";
log_error(ss.str().c_str());
goto capture_again;
return;
}
// Fill buffer with silence if told to do so.
@ -440,7 +500,7 @@ void WASAPIAudioDevice::captureAudio_()
std::stringstream ss;
ss << "Could not release capture buffer (hr = " << hr << ")";
log_error(ss.str().c_str());
goto capture_again;
return;
}
hr = captureClient_->GetNextPacketSize(&packetLength);
@ -451,14 +511,7 @@ void WASAPIAudioDevice::captureAudio_()
std::stringstream ss;
ss << "Could not get packet length (hr = " << hr << ")";
log_error(ss.str().c_str());
goto capture_again;
return;
}
}
capture_again:
enqueue_([&]() {
captureAudio_();
});
lastRenderCaptureTime_ = std::chrono::steady_clock::now();
}


@ -27,6 +27,7 @@
#include <vector>
#include <functional>
#include <chrono>
#include <thread>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
@ -47,6 +48,8 @@ public:
virtual bool isRunning() override;
virtual int getLatencyInMicroseconds() override;
protected:
friend class WASAPIAudioEngine;
@ -61,8 +64,11 @@ private:
int numChannels_;
UINT32 bufferFrameCount_;
bool initialized_;
std::chrono::time_point<std::chrono::steady_clock> lastRenderCaptureTime_;
HANDLE lowLatencyTask_;
int latencyFrames_;
std::thread renderCaptureThread_;
HANDLE renderCaptureEvent_;
bool isRenderCaptureRunning_;
void renderAudio_();
void captureAudio_();


@ -541,7 +541,7 @@ int FreeDVInterface::getTxNumSpeechSamples() const
{
if (txMode_ >= FREEDV_MODE_RADE)
{
return 1920;
return LPCNET_FRAME_SIZE;
}
assert(currentTxMode_ != nullptr);
@ -552,7 +552,7 @@ int FreeDVInterface::getTxNNomModemSamples() const
{
if (txMode_ >= FREEDV_MODE_RADE)
{
return 960;
return rade_n_tx_out(rade_);
}
assert(currentTxMode_ != nullptr);


@ -38,5 +38,7 @@
<string>NSApplication</string>
<key>NSRequiresAquaSystemAppearance</key>
<@DARK_MODE_DISABLE@ />
<key>AudioHardwarePowerHint</key>
<string>None</string>
</dict>
</plist>
</plist>


@ -397,6 +397,9 @@ void MainApp::UnitTest_()
}
}
}
// Wait a second to make sure we're not doing any more processing
std::this_thread::sleep_for(1000ms);
// Fire event to stop FreeDV
log_info("Firing stop");
@ -1704,7 +1707,7 @@ void MainFrame::OnTimer(wxTimerEvent &evt)
if (snr_limited < -5.0) snr_limited = -5.0;
if (snr_limited > 40.0) snr_limited = 40.0;
char snr[15];
snprintf(snr, 15, "%4.0f dB", g_snr);
snprintf(snr, 15, "%d dB", (int)(g_snr + 0.5));
if (freedvInterface.getSync())
{
@ -2883,12 +2886,19 @@ void MainFrame::stopRxStream()
void MainFrame::destroy_fifos(void)
{
codec2_fifo_destroy(g_rxUserdata->infifo1);
codec2_fifo_destroy(g_rxUserdata->outfifo1);
if (g_rxUserdata->infifo1) codec2_fifo_destroy(g_rxUserdata->infifo1);
if (g_rxUserdata->outfifo1) codec2_fifo_destroy(g_rxUserdata->outfifo1);
if (g_rxUserdata->infifo2) codec2_fifo_destroy(g_rxUserdata->infifo2);
if (g_rxUserdata->outfifo2) codec2_fifo_destroy(g_rxUserdata->outfifo2);
codec2_fifo_destroy(g_rxUserdata->rxinfifo);
codec2_fifo_destroy(g_rxUserdata->rxoutfifo);
g_rxUserdata->infifo1 = nullptr;
g_rxUserdata->infifo2 = nullptr;
g_rxUserdata->outfifo1 = nullptr;
g_rxUserdata->outfifo2 = nullptr;
g_rxUserdata->rxinfifo = nullptr;
g_rxUserdata->rxoutfifo = nullptr;
}
//-------------------------------------------------------------------------
@ -3114,21 +3124,28 @@ void MainFrame::startRxStream()
// loop.
int m_fifoSize_ms = wxGetApp().appConfiguration.fifoSizeMs;
int soundCard1InFifoSizeSamples = wxGetApp().appConfiguration.audioConfiguration.soundCard1In.sampleRate;
int soundCard1OutFifoSizeSamples = wxGetApp().appConfiguration.audioConfiguration.soundCard1Out.sampleRate;
g_rxUserdata->infifo1 = codec2_fifo_create(soundCard1InFifoSizeSamples);
g_rxUserdata->outfifo1 = codec2_fifo_create(soundCard1OutFifoSizeSamples);
int soundCard1InFifoSizeSamples = m_fifoSize_ms*wxGetApp().appConfiguration.audioConfiguration.soundCard1In.sampleRate / 1000;
int soundCard1OutFifoSizeSamples = m_fifoSize_ms*wxGetApp().appConfiguration.audioConfiguration.soundCard1Out.sampleRate / 1000;
if (txInSoundDevice && txOutSoundDevice)
{
int soundCard2InFifoSizeSamples = m_fifoSize_ms*wxGetApp().appConfiguration.audioConfiguration.soundCard2In.sampleRate / 1000;
int soundCard2OutFifoSizeSamples = m_fifoSize_ms*wxGetApp().appConfiguration.audioConfiguration.soundCard2Out.sampleRate / 1000;
g_rxUserdata->outfifo2 = codec2_fifo_create(soundCard2OutFifoSizeSamples);
g_rxUserdata->outfifo1 = codec2_fifo_create(soundCard1OutFifoSizeSamples);
g_rxUserdata->infifo2 = codec2_fifo_create(soundCard2InFifoSizeSamples);
g_rxUserdata->infifo1 = codec2_fifo_create(soundCard1InFifoSizeSamples);
g_rxUserdata->outfifo2 = codec2_fifo_create(soundCard2OutFifoSizeSamples);
log_debug("fifoSize_ms: %d infifo2: %d/outfifo2: %d",
wxGetApp().appConfiguration.fifoSizeMs.get(), soundCard2InFifoSizeSamples, soundCard2OutFifoSizeSamples);
}
else
{
g_rxUserdata->infifo1 = codec2_fifo_create(soundCard1InFifoSizeSamples);
g_rxUserdata->outfifo1 = codec2_fifo_create(soundCard1OutFifoSizeSamples);
g_rxUserdata->infifo2 = nullptr;
g_rxUserdata->outfifo2 = nullptr;
}
log_debug("fifoSize_ms: %d infifo1: %d/outfifo1: %d",
wxGetApp().appConfiguration.fifoSizeMs.get(), soundCard1InFifoSizeSamples, soundCard1OutFifoSizeSamples);
@ -3312,13 +3329,15 @@ void MainFrame::startRxStream()
paCallBackData* cbData = static_cast<paCallBackData*>(state);
short* audioData = static_cast<short*>(data);
short outdata[size];
int available = std::min(codec2_fifo_used(cbData->outfifo1), (int)size);
int result = codec2_fifo_read(cbData->outfifo1, outdata, size);
int result = codec2_fifo_read(cbData->outfifo1, outdata, available);
if (result == 0)
{
// write signal to all channels to start. This is so that
// the compiler can optimize for the most common case.
for(size_t i = 0; i < size; i++, audioData += dev.getNumChannels())
for(size_t i = 0; i < available; i++, audioData += dev.getNumChannels())
{
for (auto j = 0; j < dev.getNumChannels(); j++)
{
@ -3337,6 +3356,11 @@ void MainFrame::startRxStream()
audioData[0] = VOX_TONE_AMP*cos(cbData->voxTonePhase);
}
}
if (size != available)
{
g_outfifo1_empty++;
}
}
else
{


@ -224,7 +224,7 @@ void MainFrame::OnToolsOptions(wxCommandEvent& event)
// Update voice keyer file if different
wxFileName fullVKPath(wxGetApp().appConfiguration.voiceKeyerWaveFilePath, wxGetApp().appConfiguration.voiceKeyerWaveFile);
if (vkFileName_ != fullVKPath.GetFullPath().mb_str())
if (wxString::FromUTF8(vkFileName_) != fullVKPath.GetFullPath())
{
// Clear filename to force reselection next time VK is triggered.
vkFileName_ = "";
@ -888,6 +888,43 @@ void MainFrame::togglePTT(void) {
wxGetApp().Yield(true);
}
// Wait for a minimum amount of time before stopping TX to ensure that
// remaining audio gets piped to the radio from the operating system.
auto latency = txOutSoundDevice->getLatencyInMicroseconds();
// Also take into account any latency between the computer and radio.
// The only way to do this is by tracking how long it takes to respond
// to PTT requests (and that's not necessarily great, either). Normally
// this component should be a small part of the overall latency, but it
// could be larger when dealing with SDR radios that are on the network.
//
// Note: This may not provide accurate results until after going from
// TX->RX the first time, but one missed report during a session shouldn't
// be a huge deal.
auto pttController = wxGetApp().rigPttController;
if (pttController)
{
// We only need to worry about the time getting to the radio,
// not the time to get from the radio to us.
latency += pttController->getRigResponseTimeMicroseconds() / 2;
}
log_info("Pausing for a minimum of %d microseconds before TX->RX to allow remaining audio to go out", latency);
before = highResClock.now();
while(true)
{
auto diff = highResClock.now() - before;
if (diff >= std::chrono::microseconds(latency))
{
break;
}
wxThread::Sleep(1);
// Yield() used to avoid lack of UI responsiveness during delay.
wxGetApp().Yield(true);
}
// Wait an additional configured timeframe before actually clearing PTT (below)
if (wxGetApp().appConfiguration.txRxDelayMilliseconds > 0)
{


@ -26,6 +26,7 @@
#include <sstream>
#include <algorithm>
#include <cstring>
#include <chrono>
#include <strings.h>
#include "HamlibRigController.h"
@ -76,6 +77,7 @@ HamlibRigController::HamlibRigController(std::string rigName, std::string serial
, origFreq_(0)
, origMode_(RIG_MODE_NONE)
, freqOnly_(freqOnly)
, rigResponseTime_(0)
{
// Perform initial load of rig list if this is our first time being created.
InitializeHamlibLibrary();
@ -97,6 +99,7 @@ HamlibRigController::HamlibRigController(int rigIndex, std::string serialPort, c
, origFreq_(0)
, origMode_(RIG_MODE_NONE)
, freqOnly_(freqOnly)
, rigResponseTime_(0)
{
// Perform initial load of rig list if this is our first time being created.
InitializeHamlibLibrary();
@ -226,6 +229,11 @@ void HamlibRigController::requestCurrentFrequencyMode()
enqueue_(std::bind(&HamlibRigController::requestCurrentFrequencyModeImpl_, this));
}
int HamlibRigController::getRigResponseTimeMicroseconds()
{
return rigResponseTime_;
}
int HamlibRigController::RigNameToIndex(std::string rigName)
{
InitializeHamlibLibrary();
@ -412,11 +420,16 @@ void HamlibRigController::pttImpl_(bool state)
ptt_t on = state ? RIG_PTT_ON : RIG_PTT_OFF;
auto oldTime = std::chrono::steady_clock::now();
int result = RIG_OK;
if (pttType_ != PTT_VIA_NONE)
{
result = rig_set_ptt(rig_, RIG_VFO_CURR, on);
}
auto newTime = std::chrono::steady_clock::now();
auto totalTimeMicroseconds = (int)std::chrono::duration_cast<std::chrono::microseconds>(newTime - oldTime).count();
rigResponseTime_ = std::max(rigResponseTime_, totalTimeMicroseconds);
if (result != RIG_OK)
{
log_debug("rig_set_ptt: error = %s ", rigerror(result));
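Taken together, these hunks wrap each `rig_set_ptt` call in a `steady_clock` measurement and retain the slowest response observed, which `getRigResponseTimeMicroseconds()` then reports. A minimal sketch of the same pattern (illustrative Python, not code from this PR):

```python
import time

# Illustrative sketch: time a PTT-style call with a monotonic clock and keep
# the worst case seen so far, mirroring the rigResponseTime_ member above.
class ResponseTimeTracker:
    def __init__(self):
        self.rig_response_time_us = 0  # analogous to rigResponseTime_

    def timed_call(self, fn):
        old_time = time.monotonic()    # steady_clock equivalent
        fn()
        new_time = time.monotonic()
        elapsed_us = int((new_time - old_time) * 1_000_000)
        # Keep the maximum so the caller sees the slowest rig response.
        self.rig_response_time_us = max(self.rig_response_time_us, elapsed_us)
```

The caller can then pause at least this long before dropping PTT, matching the TX->RX wait loop earlier in this change.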


@@ -63,6 +63,8 @@ public:
static int RigNameToIndex(std::string rigName);
static std::string RigIndexToName(unsigned int rigIndex);
static int GetNumberSupportedRadios();
virtual int getRigResponseTimeMicroseconds() override;
private:
using RigList = std::vector<const struct rig_caps *>;
@@ -85,6 +87,8 @@ private:
rmode_t origMode_;
bool freqOnly_;
int rigResponseTime_;
vfo_t getCurrentVfo_();
void setFrequencyHelper_(vfo_t currVfo, uint64_t frequencyHz);
void setModeHelper_(vfo_t currVfo, rmode_t mode);


@@ -35,6 +35,8 @@ public:
virtual void ptt(bool state) = 0;
virtual int getRigResponseTimeMicroseconds() = 0;
protected:
IRigPttController() = default;
};


@@ -34,6 +34,7 @@ public:
virtual ~SerialPortInRigController();
virtual void ptt(bool state) override { /* does not support output */ }
virtual int getRigResponseTimeMicroseconds() override { return 0; /* no support for output */ }
private:
std::thread pollThread_;


@@ -28,6 +28,7 @@ SerialPortOutRigController::SerialPortOutRigController(std::string serialPort, b
, rtsPos_(RTSPos)
, useDTR_(useDTR)
, dtrPos_(DTRPos)
, rigResponseTime_(0)
{
// Ensure that PTT is disabled on successful connect.
onRigConnected += [&](IRigController*) {
@@ -46,6 +47,11 @@ void SerialPortOutRigController::ptt(bool state)
enqueue_(std::bind(&SerialPortOutRigController::pttImpl_, this, state));
}
int SerialPortOutRigController::getRigResponseTimeMicroseconds()
{
return rigResponseTime_;
}
void SerialPortOutRigController::pttImpl_(bool state)
{
/* Truth table:
@@ -62,6 +68,7 @@ void SerialPortOutRigController::pttImpl_(bool state)
if (serialPortHandle_ != COM_HANDLE_INVALID)
{
auto oldTime = std::chrono::steady_clock::now();
if (useRTS_) {
if (state == rtsPos_)
raiseRTS_();
@@ -74,6 +81,9 @@
else
lowerDTR_();
}
auto newTime = std::chrono::steady_clock::now();
auto totalTimeMicroseconds = (int)std::chrono::duration_cast<std::chrono::microseconds>(newTime - oldTime).count();
rigResponseTime_ = std::max(rigResponseTime_, totalTimeMicroseconds);
onPttChange(this, state);
}


@@ -33,12 +33,15 @@ public:
virtual ~SerialPortOutRigController();
virtual void ptt(bool state) override;
virtual int getRigResponseTimeMicroseconds() override;
private:
bool useRTS_;
bool rtsPos_;
bool useDTR_;
bool dtrPos_;
int rigResponseTime_;
void pttImpl_(bool state);
};


@@ -43,6 +43,7 @@ OmniRigController::OmniRigController(int rigId, bool restoreOnDisconnect, bool f
, restoreOnDisconnect_(restoreOnDisconnect)
, writableParams_(0)
, freqOnly_(freqOnly)
, rigResponseTime_(0)
{
// empty
}
@@ -101,6 +102,11 @@ void OmniRigController::requestCurrentFrequencyMode()
enqueue_(std::bind(&OmniRigController::requestCurrentFrequencyModeImpl_, this));
}
int OmniRigController::getRigResponseTimeMicroseconds()
{
return rigResponseTime_;
}
void OmniRigController::connectImpl_()
{
// Ensure that COM is properly initialized.
@@ -180,7 +186,12 @@ void OmniRigController::pttImpl_(bool state)
{
if (rig_ != nullptr)
{
auto oldTime = std::chrono::steady_clock::now();
rig_->put_Tx(state ? PM_TX : PM_RX);
auto newTime = std::chrono::steady_clock::now();
auto totalTimeMicroseconds = (int)std::chrono::duration_cast<std::chrono::microseconds>(newTime - oldTime).count();
rigResponseTime_ = std::max(rigResponseTime_, totalTimeMicroseconds);
onPttChange(this, state);
}
}


@@ -41,6 +41,8 @@ public:
virtual void setFrequency(uint64_t frequency) override;
virtual void setMode(IRigFrequencyController::Mode mode) override;
virtual void requestCurrentFrequencyMode() override;
virtual int getRigResponseTimeMicroseconds() override;
private:
int rigId_; // can be either 0 or 1 (Rig 1 or 2)
@@ -53,6 +55,7 @@ private:
bool restoreOnDisconnect_;
long writableParams_; // used to help determine VFO
bool freqOnly_;
int rigResponseTime_;
void connectImpl_();
void disconnectImpl_();


@@ -106,7 +106,7 @@ public:
m_tabs->Refresh();
m_tabs->Update();
wxAuiNotebookPageArray& pages = m_tabs->GetPages();
auto& pages = m_tabs->GetPages();
size_t i, page_count = pages.GetCount();
for (i = 0; i < page_count; ++i)


@@ -218,7 +218,7 @@ int MainFrame::VoiceKeyerStartTx(void)
SNDFILE* tmpPlayFile = sf_open(vkFileName_.c_str(), SFM_READ, &sfInfo);
if(tmpPlayFile == NULL) {
wxString strErr = sf_strerror(NULL);
wxMessageBox(strErr, wxT("Couldn't open:") + vkFileName_, wxOK);
wxMessageBox(strErr, wxT("Couldn't open:") + wxString::FromUTF8(vkFileName_), wxOK);
next_state = VK_IDLE;
m_togBtnVoiceKeyer->SetBackgroundColour(wxNullColour);
m_togBtnVoiceKeyer->SetValue(false);
@@ -237,7 +237,7 @@ int MainFrame::VoiceKeyerStartTx(void)
g_sfPlayFile = tmpPlayFile;
SetStatusText(wxT("Voice Keyer: Playing file ") + vkFileName_ + wxT(" to mic input") , 0);
SetStatusText(wxT("Voice Keyer: Playing file ") + wxString::FromUTF8(vkFileName_) + wxT(" to mic input") , 0);
g_loopPlayFileToMicIn = false;
g_playFileToMicIn = true;


@@ -0,0 +1,184 @@
<#
.SYNOPSIS
Executes RADE EOO test of FreeDV.
.DESCRIPTION
This script starts FreeDV in TX mode for approximately 5 seconds using an autogenerated configuration file
that will access the audio devices passed in. The audio output from TX is saved to a temporary file using the SoX
tool. After 5 seconds, FreeDV will terminate and restart in RX mode using the recorded temporary file. This script
will examine the output to determine whether it is able to properly decode the test callsign. If the callsign
does not appear in the logs, the test is marked as having failed.
.INPUTS
None. You can't pipe objects to this script.
.OUTPUTS
The script outputs test status to the console.
.EXAMPLE
PS> .\TestFreeDVReporting.ps1 -RadioToComputerDevice "Microphone (USB Audio CODEC)" -ComputerToRadioDevice "Speakers (USB Audio CODEC)" -ComputerToSpeakerDevice "Speakers (Realtek High Definition Audio(SST))" -MicrophoneToComputerDevice "Microphone Array (Realtek High Definition Audio(SST))"
#>
param (
[Parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string]
# The sound device to receive RX audio from.
$RadioToComputerDevice,
[Parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string]
# The sound device to emit decoded audio to.
$ComputerToSpeakerDevice,
[Parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string]
# The sound device to receive analog audio from.
$MicrophoneToComputerDevice,
[Parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string]
# The sound device to emit TX audio to.
$ComputerToRadioDevice)
<#
.Description
Performs the actual test with FreeDV by generating the needed configuration file, starting FreeDV and then examining the output.
#>
function Test-FreeDV {
param (
$RadioToComputerDevice,
$ComputerToSpeakerDevice,
$MicrophoneToComputerDevice,
$ComputerToRadioDevice
)
$current_loc = Get-Location
# Generate new conf
$conf_tmpl = Get-Content "$current_loc\freedv-ctest-reporting.conf.tmpl"
$conf_tmpl = $conf_tmpl.Replace("@FREEDV_RADIO_TO_COMPUTER_DEVICE@", $RadioToComputerDevice)
$conf_tmpl = $conf_tmpl.Replace("@FREEDV_COMPUTER_TO_RADIO_DEVICE@", $ComputerToRadioDevice)
$conf_tmpl = $conf_tmpl.Replace("@FREEDV_MICROPHONE_TO_COMPUTER_DEVICE@", $MicrophoneToComputerDevice)
$conf_tmpl = $conf_tmpl.Replace("@FREEDV_COMPUTER_TO_SPEAKER_DEVICE@", $ComputerToSpeakerDevice)
$tmp_file = New-TemporaryFile
$conf_tmpl | Set-Content -Path $tmp_file.FullName
# Start SoX
$soxPsi = New-Object System.Diagnostics.ProcessStartInfo
$soxPsi.CreateNoWindow = $true
$soxPsi.UseShellExecute = $false
$soxPsi.RedirectStandardError = $false
$soxPsi.RedirectStandardOutput = $false
$soxPsi.FileName = "sox.exe"
$soxPsi.WorkingDirectory = $current_loc
$quoted_device = "`"" + $RadioToComputerDevice + "`""
$soxPsi.Arguments = @("-t waveaudio $quoted_device -c 1 -r 8000 -t wav `"$current_loc\test.wav`"")
$soxProcess = New-Object System.Diagnostics.Process
$soxProcess.StartInfo = $soxPsi
[void]$soxProcess.Start()
# Start mock rigctld
$rigctlPsi = New-Object System.Diagnostics.ProcessStartInfo
$rigctlPsi.CreateNoWindow = $true
$rigctlPsi.UseShellExecute = $false
$rigctlPsi.RedirectStandardError = $true
$rigctlPsi.RedirectStandardOutput = $true
$rigctlPsi.FileName = "$current_loc\python.exe"
$rigctlPsi.WorkingDirectory = $current_loc
$quoted_script_name = "`"" + "hamlibserver.py" + "`""
$rigctlPsi.Arguments = @("$quoted_script_name " + $soxProcess.Id)
$rigctlProcess = New-Object System.Diagnostics.Process
$rigctlProcess.StartInfo = $rigctlPsi
[void]$rigctlProcess.Start()
# Start freedv.exe
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.CreateNoWindow = $true
$psi.UseShellExecute = $false
$psi.RedirectStandardError = $true
$psi.RedirectStandardOutput = $true
$psi.FileName = "$current_loc\freedv.exe"
$psi.WorkingDirectory = $current_loc
$quoted_tmp_filename = "`"" + $tmp_file.FullName + "`""
$psi.Arguments = @("/f $quoted_tmp_filename /ut tx /utmode RADEV1 /txtime 5")
$process = New-Object System.Diagnostics.Process
$process.StartInfo = $psi
[void]$process.Start()
# Read output from first FreeDV run
$err_output = $process.StandardError.ReadToEnd();
$output = $process.StandardOutput.ReadToEnd();
$process.WaitForExit()
Write-Host "$err_output"
# Stop recording audio
try {
$soxProcess.Kill()
} catch {
# Ignore failure as Python could have killed sox
}
$soxProcess.WaitForExit()
# Restart FreeDV in RX mode
$psi.Arguments = @("/f $quoted_tmp_filename /ut rx /utmode RADEV1 /rxfile `"$current_loc\test.wav`"")
$process = New-Object System.Diagnostics.Process
$process.StartInfo = $psi
[void]$process.Start()
# Read output from second FreeDV run
$err_output_fdv = $process.StandardError.ReadToEnd()
$output = $process.StandardOutput.ReadToEnd()
$process.WaitForExit()
Write-Host "$err_output_fdv"
# Kill mock rigctld
$rigctlProcess.Kill()
$err_output = $rigctlProcess.StandardError.ReadToEnd()
$output = $rigctlProcess.StandardOutput.ReadToEnd()
$rigctlProcess.WaitForExit()
Write-Host "$err_output"
# Check for RX callsign
$syncs = ($err_output_fdv -split "`r?`n") | Where { $_.Contains("Reporting callsign ZZ0ZZZ @ SNR") }
if ($syncs.Count -eq 1) {
return $true
}
return $false
}
$passes = 0
$fails = 0
$result = Test-FreeDV `
-RadioToComputerDevice $RadioToComputerDevice `
-ComputerToSpeakerDevice $ComputerToSpeakerDevice `
-MicrophoneToComputerDevice $MicrophoneToComputerDevice `
-ComputerToRadioDevice $ComputerToRadioDevice
if ($result -eq $true)
{
$passes++
}
else
{
$fails++
}
Write-Host "Mode: RADEV1, Passed: $passes, Failures: $fails"
if ($fails -gt 0) {
throw "Test failed"
}


@@ -0,0 +1,189 @@
FirstTimeUse=0
ExperimentalFeatures=0
[Audio]
soundCard1SampleRate=-1
soundCard2SampleRate=-1
soundCard1InDeviceName=@FREEDV_RADIO_TO_COMPUTER_DEVICE@
soundCard1InSampleRate=48000
soundCard1OutDeviceName=@FREEDV_COMPUTER_TO_RADIO_DEVICE@
soundCard1OutSampleRate=48000
soundCard2InDeviceName=@FREEDV_MICROPHONE_TO_COMPUTER_DEVICE@
soundCard2InSampleRate=48000
soundCard2OutDeviceName=@FREEDV_COMPUTER_TO_SPEAKER_DEVICE@
soundCard2OutSampleRate=48000
SquelchActive=1
SquelchLevel=-4
fifoSize_ms=440
transmitLevel=0
snrSlow=0
mode=257
TxRxDelayMilliseconds=0
[Filter]
codec2LPCPostFilterGamma=50
codec2LPCPostFilterBeta=20
MicInBassFreqHz=100
MicInBassGaindB=0
MicInTrebleFreqHz=3000
MicInTrebleGaindB=0
MicInMidFreqHz=1500
MicInMidGaindB=0
MicInMidQ=100
MicInVolInDB=0
SpkOutBassFreqHz=100
SpkOutBassGaindB=0
SpkOutTrebleFreqHz=3000
SpkOutTrebleGaindB=0
SpkOutMidFreqHz=1500
SpkOutMidGaindB=0
SpkOutMidQ=100
SpkOutVolInDB=0
codec2LPCPostFilterEnable=1
codec2LPCPostFilterBassBoost=1
speexpp_enable=1
700C_EQ=1
[Filter/MicIn]
EQEnable=0
BassFreqHz=100
BassGaindB=0
TrebleFreqHz=3000
TrebleGaindB=0
MidFreqHz=1500
MidGaindB=0
MidQ=1
VolInDB=0
[Filter/SpkOut]
EQEnable=0
BassFreqHz=100
BassGaindB=0
TrebleFreqHz=3000
TrebleGaindB=0
MidFreqHz=1500
MidGaindB=0
MidQ=1
VolInDB=0
[Filter/codec2LPCPostFilter]
Gamma=50
Beta=20
[Hamlib]
UseForPTT=0
EnableFreqModeChanges=1
UseAnalogModes=0
IcomCIVHex=0
RigNameStr=ADAT www.adat.ch ADT-200A
PttType=0
SerialRate=0
SerialPort=
PttSerialPort=
RigName=0
[Rig]
UseSerialPTT=0
Port=
UseRTS=1
RTSPolarity=1
UseDTR=0
DTRPolarity=0
UseSerialPTTInput=0
PttInPort=
CTSPolarity=0
leftChannelVoxTone=0
EnableSpacebarForPTT=1
HalfDuplex=1
MultipleRx=1
SingleRxThread=1
[PSKReporter]
Enable=0
Callsign=
GridSquare=
FrequencyHzStr=0
[Data]
CallSign=
[Reporting]
Enable=1
Callsign=ZZ0ZZZ
GridSquare=ZZ12ZZ
FrequencyAsKHz=0
FrequencyList=1.9970,3.6250,3.6430,3.6930,3.6970,3.8500,5.4035,5.3665,5.3685,7.1770,7.1970,14.2360,14.2400,18.1180,21.3130,24.9330,28.3300,28.7200,10489.6400
ManualFrequencyReporting=1
DirectionAsCardinal=0
Frequency=14236000
[Reporting/PSKReporter]
Enable=1
[Reporting/FreeDV]
Enable=1
Hostname=qso.freedv.org
CurrentBandFilter=0
UseMetricDistances=1
BandFilterTracksFrequency=0
ForceReceiveOnly=0
StatusText=FreeDV Automated Test System - https://github.com/drowe67/freedv-gui
RecentStatusTexts=
TxRowBackgroundColor=#fc4500
TxRowForegroundColor=#000000
RxRowBackgroundColor=#379baf
RxRowForegroundColor=#000000
MsgRowBackgroundColor=#E58BE5
MsgRowForegroundColor=#000000
[Reporting/FreeDV/BandFilterTracking]
TracksFreqBand=1
TracksExactFreq=0
[CallsignList]
UseUTCTime=0
[FreeDV2020]
Allowed=0
[MainFrame]
left=26
top=23
width=800
height=780
rxNbookCtrl=0
TabLayout=
[Windows]
[Windows/AudioConfig]
left=26
top=23
width=918
height=739
[Windows/FreeDVReporter]
left=20
top=20
width=-1
height=-1
visible=0
currentSort=-1
currentSortDirection=1
reportingUserMsgColWidth=130
[File]
playFileToMicInPath=
recFileFromRadioPath=
recFileFromRadioSecs=60
recFileFromModulatorPath=
recFileFromModulatorSecs=60
playFileFromRadioPath=
[VoiceKeyer]
WaveFilePath=/home/mooneer/Documents
WaveFile=voicekeyer.wav
RxPause=10
Repeats=5
[FreeDV700]
txClip=1
txBPF=1
[Noise]
noise_snr=2
[Debug]
console=0
verbose=0
APIverbose=0
[Waterfall]
Color=0
[Stats]
ResetTime=10
[Plot]
[Plot/Spectrum]
CurrentAveraging=0
[Monitor]
VoiceKeyerAudio=0
TransmitAudio=0
VoiceKeyerAudioVol=0
TransmitAudioVol=0
[QuickRecord]
SavePath=/home/mooneer/Documents


@@ -65,15 +65,16 @@ VolInDB=0
Gamma=50
Beta=20
[Hamlib]
UseForPTT=0
EnableFreqModeChanges=1
UseForPTT=1
EnableFreqModeChanges=0
EnableFreqChangesOnly=0
UseAnalogModes=0
IcomCIVHex=0
RigNameStr=ADAT www.adat.ch ADT-200A
RigNameStr=Hamlib NET rigctl
PttType=0
SerialRate=0
SerialPort=
PttSerialPort=
SerialPort=localhost:4575
PttSerialPort=localhost:4575
RigName=0
[Rig]
UseSerialPTT=0


@@ -0,0 +1,417 @@
#!/usr/bin/python3
# This software is Copyright (C) 2012 by James C. Ahlstrom, and is
# licensed for use under the GNU General Public License (GPL).
# See http://www.opensource.org.
# Note that there is NO WARRANTY AT ALL. USE AT YOUR OWN RISK!!
# Henning Paul, DC4HP, ported this software to Python3 and added improvements in April, 2022.
# Thanks Henning!!!
# Modified by Mooneer Salem (4/2/2025) to support killing processes by PID
# when "radio" goes from TX to RX. This is to enable further testing of RADE EOO
# (namely ensuring that EOO is actually sent to the radio by the OS audio drivers
# before the radio goes back to receive).
import sys
import time
import socket
import traceback
import string
import os
import signal
_RIGCTL_PORT = 4532
# Choose which port to use here:
PORT = 4575
#PORT = _RIGCTL_PORT
# This module creates a Hamlib TCP server that implements the rigctl protocol. To start the server,
# run "python hamlibserver.py" from a command line. To exit the server, type control-C. Connect a
# client to the server using localhost and port 4575. The TCP server will imitate a software defined
# radio, and you can get and set the frequency, etc.
# Only the commands dump_state, freq, mode, ptt and vfo are implemented.
# This is not a real hardware server. It is meant as sample code to show how to implement the protocol
# in SDR control software. You can test it with "rigctl -m 2 -r localhost:4575".
#RIGCTLD_PROT_VER
#rig_model
#0
#rxstartf rxendf rxmodes rxlow_power rxhigh_power rxvfo rxant
#0 0 0 0 0 0 0
#txstartf txendf txmodes txlow_power txhigh_power txvfo txant
#0 0 0 0 0 0 0
#modes tuningsteps
#0 0
#modes bandwidth
#0 0
#max_rit
#max_xit
#max_ifshift
#0
#preamp1 preamp2
#attenuator1 attenuator2
#
#has_get_func
#has_set_func
#has_get_level
#has_set_level
#has_get_parm
#has_set_parm
#modes definitions:
# 0 AM -- Amplitude Modulation
# 1 CW -- CW "normal" sideband
# 2 USB -- Upper Side Band
# 3 LSB -- Lower Side Band
# 4 RTTY -- Radio Teletype
# 5 FM -- "narrow" band FM
# 6 WFM -- broadcast wide FM
# 7 CW "reverse" sideband
# 8 RTTY "reverse" sideband
# 9 AMS -- Amplitude Modulation Synchronous
#10 PKTLSB -- Packet/Digital LSB mode (dedicated port)
#11 PKTUSB -- Packet/Digital USB mode (dedicated port)
#12 PKTFM -- Packet/Digital FM mode (dedicated port)
#13 ECSSUSB -- Exalted Carrier Single Sideband USB
#14 ECSSLSB -- Exalted Carrier Single Sideband LSB
#15 FAX -- Facsimile Mode
#16 SAM -- Synchronous AM double sideband
#17 SAL -- Synchronous AM lower sideband
#18 SAH -- Synchronous AM upper (higher) sideband
#19 DSB -- Double sideband suppressed carrier
#21 FMN -- FM Narrow Kenwood ts990s
#22 PKTAM -- Packet/Digital AM mode e.g. IC7300
#23 P25 -- APCO/P25 VHF,UHF digital mode IC-R8600
#24 D-Star -- VHF,UHF digital mode IC-R8600
#25 dPMR -- digital PMR, VHF,UHF digital mode IC-R8600
#26 NXDN-VN -- VHF,UHF digital mode IC-R8600
#27 NXDN-N -- VHF,UHF digital mode IC-R8600
#28 DCR -- VHF,UHF digital mode IC-R8600
#29 AM-N -- Narrow band AM mode IC-R30
#30 PSK - Kenwood PSK and others
#31 PSKR - Kenwood PSKR and others
#32 DD Mode IC-9700
#33 Yaesu C4FM mode
#34 Yaesu DATA-FM-N
#35 Unfiltered as in PowerSDR
#36 CWN -- Narrow band CW (FT-736R)
#37 IQ mode for a couple of kit rigs
# A possible response to the "dump_state" request
dump1 = """ 2
2
2
150000.000000 1500000000.000000 0x1ff -1 -1 0x10000003 0x3
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0x1ff 1
0x1ff 0
0 0
0x1e 2400
0x2 500
0x1 8000
0x1 2400
0x20 15000
0x20 8000
0x40 230000
0 0
9990
9990
10000
0
10
10 20 30
0x3effffff
0x3effffff
0x7fffffff
0x7fffffff
0x7fffffff
0x7fffffff
"""
# Another possible response to the "dump_state" request
dump2 = """ 0
2
2
150000.000000 30000000.000000 0x900af -1 -1 0x10000003 0x3
0 0 0 0 0 0 0
150000.000000 30000000.000000 0x900af -1 -1 0x10000003 0x3
0 0 0 0 0 0 0
0 0
0 0
0
0
0
0
0x0
0x0
0x0
0x0
0x0
0
"""
class HamlibHandler:
"""This class is created for each connection to the server. It services requests from each client"""
SingleLetters = { # convert single-letter commands to long commands
'f':'freq',
'm':'mode',
't':'ptt',
'v':'vfo',
's':'split_vfo',
'i':'split_freq',
'x':'split_mode'
}
def __init__(self, app, sock, address, pid):
self.app = app # Reference back to the "hardware"
self.sock = sock
sock.settimeout(0.0)
self.address = address
self.pid = pid
self.received = b''
h = self.Handlers = {}
h[''] = self.ErrProtocol
h['dump_state'] = self.DumpState
h['get_freq'] = self.GetFreq
h['set_freq'] = self.SetFreq
h['get_mode'] = self.GetMode
h['set_mode'] = self.SetMode
h['get_vfo'] = self.GetVfo
h['set_vfo'] = self.SetVfo
h['get_ptt'] = self.GetPtt
h['set_ptt'] = self.SetPtt
h['get_split_vfo'] = self.GetSplitVfo
h['set_split_vfo'] = self.SetSplitVfo
h['get_split_freq'] = self.GetSplitFreq
h['set_split_freq'] = self.SetSplitFreq
h['get_split_mode'] = self.GetSplitMode
h['set_split_mode'] = self.SetSplitMode
def Send(self, text):
"""Send text back to the client."""
try:
self.sock.sendall(bytearray(text.encode()))
except socket.error:
self.sock.close()
self.sock = None
def Reply(self, *args): # args is name, value, name, value, ..., int
"""Create a string reply of name, value pairs, and an ending integer code."""
if self.extended: # Use extended format
t = "%s:" % self.cmd # Extended format echoes the command and parameters
for param in self.params:
t = "%s %s" % (t, param)
t += self.extended
for i in range(0, len(args) - 1, 2):
t = "%s%s: %s%c" % (t, args[i], args[i+1], self.extended)
t += "RPRT %d\n" % args[-1]
elif len(args) > 1: # Use simple format
t = ''
for i in range(1, len(args) - 1, 2):
t = "%s%s\n" % (t, args[i])
else: # No names; just the required integer code
t = "RPRT %d\n" % args[0]
print('Reply', t)
self.Send(t)
def ErrParam(self): # Invalid parameter
self.Reply(-1)
def UnImplemented(self): # Command not implemented
self.Reply(-4)
def ErrProtocol(self): # Protocol error
self.Reply(-8)
def Process(self):
"""This is the main processing loop, and is called frequently. It reads and satisfies requests."""
if not self.sock:
return 0
try: # Read any data from the socket
text = self.sock.recv(1024)
except socket.timeout: # This does not work
pass
except socket.error: # Nothing to read
pass
else: # We got some characters
self.received += text
if b'\n' in self.received: # A complete command ending with newline is available
cmd, self.received = self.received.split(b'\n', 1) # Split off the command, save any further characters
else:
return 1
cmd = cmd.decode()
cmd = cmd.strip() # Here is our command
print('Get', cmd)
if not cmd: # ??? Indicates a closed connection?
print('empty command')
self.sock.close()
self.sock = None
return 0
# Parse the command and call the appropriate handler
if cmd[0] == '+': # rigctld Extended Response Protocol
self.extended = '\n'
cmd = cmd[1:].strip()
elif cmd[0] in ';|,': # rigctld Extended Response Protocol
self.extended = cmd[0]
cmd = cmd[1:].strip()
else:
self.extended = None
if cmd[0:1] == '\\': # long form command starting with backslash
args = cmd[1:].split()
self.cmd = args[0]
self.params = args[1:]
self.Handlers.get(self.cmd, self.UnImplemented)()
else: # single-letter command
self.params = cmd[1:].strip()
cmd = cmd[0:1]
try:
t = self.SingleLetters[cmd.lower()]
except KeyError:
self.UnImplemented()
else:
if cmd in string.ascii_uppercase:
self.cmd = 'set_' + t
else:
self.cmd = 'get_' + t
self.Handlers.get(self.cmd, self.UnImplemented)()
return 1
# These are the handlers for each request
def DumpState(self):
self.Send(dump2)
def GetFreq(self):
self.Reply('Frequency', self.app.freq, 0)
def SetFreq(self):
try:
x = float(self.params)
self.Reply(0)
except:
self.ErrParam()
else:
x = int(x + 0.5)
self.app.freq = x
def GetMode(self):
self.Reply('Mode', self.app.mode, 'Passband', self.app.bandwidth, 0)
def SetMode(self):
try:
mode, bw = self.params.split()
bw = int(float(bw) + 0.5)
self.Reply(0)
except:
self.ErrParam()
else:
self.app.mode = mode
self.app.bandwidth = bw
def GetVfo(self):
self.Reply('VFO', self.app.vfo, 0)
def SetVfo(self):
try:
x = self.params.upper()
self.Reply(0)
except:
self.ErrParam()
else:
self.app.vfo = x
def GetPtt(self):
self.Reply('PTT', self.app.ptt, 0)
def SetPtt(self):
try:
x = int(self.params)
self.Reply(0)
except:
self.ErrParam()
else:
if (not x) and self.app.ptt:
# Sleep for 20ms to match typical SDR behavior.
# Example: Flex 6000/8000 (https://community.flexradio.com/discussion/8028104/question-regarding-tx-delay)
time.sleep(20 / 1000)
os.kill(self.pid, signal.SIGTERM)
if x:
self.app.ptt = 1
else:
self.app.ptt = 0
def GetSplitVfo(self):
self.Reply('SPLIT', self.app.splitenable, 'TXVFO', self.app.txvfo, 0)
def SetSplitVfo(self):
try:
splitenable, txvfo = self.params.split()
self.Reply(0)
except:
self.ErrParam()
else:
self.app.splitenable = splitenable
self.app.txvfo = txvfo
def GetSplitFreq(self):
self.Reply('TX Frequency', self.app.txfreq, 0)
def SetSplitFreq(self):
try:
x = float(self.params)
self.Reply(0)
except:
self.ErrParam()
else:
x = int(x + 0.5)
self.app.txfreq = x
def GetSplitMode(self):
self.Reply('TX Mode', self.app.txmode, 'TX Passband', self.app.txbandwidth, 0)
def SetSplitMode(self):
try:
mode, bw = self.params.split()
bw = int(float(bw) + 0.5)
self.Reply(0)
except:
self.ErrParam()
else:
self.app.txmode = mode
self.app.txbandwidth = bw
class App:
"""This is the main application class. It listens for connectons from clients and creates a server for each one."""
def __init__(self, pid):
self.pid = pid
self.hamlib_clients = []
self.hamlib_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
self.hamlib_socket.bind(('localhost', PORT))
except socket.error:
print("could not open listening socket")
sys.exit(-1)
self.hamlib_socket.settimeout(0.0)
self.hamlib_socket.listen(0)
# This is the state of the "hardware"
self.freq = 21200500
self.mode = 'USB'
self.bandwidth = 2400
self.vfo = "VFO"
self.ptt = 0
self.splitenable = 0
self.txvfo = 'VFO'
self.txfreq = 21200500
self.txmode = 'USB'
self.txbandwidth = 2400
def Run(self):
while 1:
time.sleep(0.01)
try:
conn, address = self.hamlib_socket.accept()
except socket.error:
pass
else:
print('Connection from', address)
self.hamlib_clients.append(HamlibHandler(self, conn, address, self.pid))
for client in self.hamlib_clients:
ret = client.Process()
if not ret: # False return indicates a closed connection; remove the server
self.hamlib_clients.remove(client)
print('Remove', client.address)
break
if __name__ == "__main__":
try:
if len(sys.argv) != 2:
raise RuntimeError("A PID for the process to kill on TX->RX is required")
App(int(sys.argv[1])).Run()
except KeyboardInterrupt:
sys.exit(0)
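As the module's header comments note, the mock server speaks the line-oriented rigctl text protocol and can be exercised with `rigctl -m 2 -r localhost:4575`. A toy, self-contained sketch of the same wire exchange (stand-in server and client, hypothetical names; not part of this PR) looks like:

```python
import socket
import threading

# Toy sketch of the rigctl wire format: a client sends a single-letter
# command such as "t\n" (get_ptt) and reads the value back as a text line.
# This stand-in server answers only that one command.
def query_ptt():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("localhost", 0))      # ephemeral port; the real mock uses 4575
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve_one():
        conn, _ = srv.accept()
        if conn.recv(1024).decode().strip() == "t":  # get_ptt
            conn.sendall(b"0\n")                     # PTT currently off
        conn.close()

    t = threading.Thread(target=serve_one, daemon=True)
    t.start()
    cli = socket.create_connection(("localhost", port))
    cli.sendall(b"t\n")
    reply = cli.recv(1024).decode().strip()
    cli.close()
    t.join()
    srv.close()
    return reply
```

In the real mock, a `T 0` (set_ptt off) after PTT was on additionally triggers the 20 ms delay and `os.kill` of the recorder PID shown above.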


@@ -30,7 +30,12 @@ if [ "$OPERATING_SYSTEM" == "Linux" ]; then
fi
# Determine correct record device to retrieve TX data
FREEDV_CONF_FILE=freedv-ctest-reporting.conf
if [ "$2" == "mpp" ]; then
FREEDV_CONF_FILE=freedv-ctest-reporting-mpp.conf
else
FREEDV_CONF_FILE=freedv-ctest-reporting.conf
fi
if [ "$OPERATING_SYSTEM" == "Linux" ]; then
REC_DEVICE="$FREEDV_COMPUTER_TO_RADIO_DEVICE.monitor"
else
@@ -61,17 +66,21 @@ mv $(pwd)/$FREEDV_CONF_FILE.tmp $(pwd)/$FREEDV_CONF_FILE
if [ "$OPERATING_SYSTEM" == "Linux" ]; then
parecord --channels=1 --rate 8000 --file-format=wav --device "$REC_DEVICE" --latency 1 test.wav &
else
sox -t $SOX_DRIVER "$REC_DEVICE" -c 1 -r 8000 -t wav test.wav &
sox -t $SOX_DRIVER "$REC_DEVICE" -c 1 -r 8000 -t wav test.wav >/dev/null 2>&1 &
fi
RECORD_PID=$!
# Start "radio"
python3 $SCRIPTPATH/hamlibserver.py $RECORD_PID &
RADIO_PID=$!
# Start FreeDV in test mode to record TX
if [ "$2" == "mpp" ]; then
TX_ARGS="-txtime 1 -txattempts 6 "
else
TX_ARGS="-txtime 5 "
fi
$FREEDV_BINARY -f $(pwd)/$FREEDV_CONF_FILE -ut tx -utmode RADE $TX_ARGS >tmp.log 2>&1
$FREEDV_BINARY -f $(pwd)/$FREEDV_CONF_FILE -ut tx -utmode RADE $TX_ARGS
FDV_PID=$!
#sleep 30
@@ -108,3 +117,6 @@ if [ "$OPERATING_SYSTEM" == "Linux" ]; then
pactl unload-module $DRIVER_INDEX_FREEDV_COMPUTER_TO_RADIO
pactl unload-module $DRIVER_INDEX_FREEDV_MICROPHONE_TO_COMPUTER
fi
# End radio process as it's no longer needed
kill $RADIO_PID