IXAudio2::CreateSubmixVoice method (xaudio2.h)
Creates and configures a submix voice.

Syntax
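The declaration, approximately as it appears in the xaudio2.h header (SAL annotations shown as on the reference page; check your SDK headers for the authoritative form):

```cpp
HRESULT CreateSubmixVoice(
    [out]          IXAudio2SubmixVoice        **ppSubmixVoice,
    [in]           UINT32                     InputChannels,
    [in]           UINT32                     InputSampleRate,
    [in]           UINT32                     Flags,
    [in]           UINT32                     ProcessingStage,
    [in, optional] const XAUDIO2_VOICE_SENDS  *pSendList,
    [in, optional] const XAUDIO2_EFFECT_CHAIN *pEffectChain
);
```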
Parameters

ppSubmixVoice
On success, returns a pointer to the new IXAudio2SubmixVoice object.

InputChannels
Number of channels in the input audio data of the submix voice. InputChannels must be less than or equal to XAUDIO2_MAX_AUDIO_CHANNELS.

InputSampleRate
Sample rate of the input audio data of the submix voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between XAUDIO2_MIN_SAMPLE_RATE and XAUDIO2_MAX_SAMPLE_RATE.

Flags
Flags that specify the behavior of the submix voice. This can be 0 or XAUDIO2_VOICE_USEFILTER (the filter effect should be available on this voice).

ProcessingStage
An arbitrary number that specifies when this voice is processed with respect to other submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices that include a smaller ProcessingStage value and before all other voices that include a larger ProcessingStage value. Voices that include the same ProcessingStage value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal ProcessingStage value; this prevents audio from being lost due to a submix cycle.

pSendList
Pointer to a list of XAUDIO2_VOICE_SENDS structures that describe the set of destination voices for the submix voice. If pSendList is NULL, the send list will default to a single output to the first mastering voice created.
pEffectChain
Pointer to a list of XAUDIO2_EFFECT_CHAIN structures that describe an effect chain to use in the submix voice.

Return value
Returns S_OK if successful; otherwise, an error code. See XAudio2 Error Codes for descriptions of XAudio2-specific error codes.

Remarks
Submix voices receive the output of one or more source or submix voices. They process the output, and then send it to another submix voice or to a mastering voice.

A submix voice performs a sample rate conversion from its input sample rate to the input rate of its output voices in pSendList. If you specify multiple voice sends, they must all have the same input sample rate.

You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.

When first created, submix voices are in the started state.

XAudio2 uses an internal memory pooler for voices with the same format. This means that memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).

It is invalid to call CreateSubmixVoice from within a callback (that is, IXAudio2EngineCallback or IXAudio2VoiceCallback). If you call CreateSubmixVoice within a callback, it returns XAUDIO2_E_INVALID_CALL.
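A sketch of typical usage, routing a source voice through a submix (the names pXAudio2, pSFXSubmix, pSourceVoice, and wfx are illustrative, not part of the API):

```cpp
// Create a submix voice for sound effects: 2 channels at 44100 Hz.
// With pSendList == nullptr it outputs to the first mastering voice.
IXAudio2SubmixVoice* pSFXSubmix = nullptr;
HRESULT hr = pXAudio2->CreateSubmixVoice(&pSFXSubmix, 2, 44100);
if (FAILED(hr))
{
    // Handle the error; see XAudio2 Error Codes.
}

// Route a source voice into the submix instead of the mastering voice.
XAUDIO2_SEND_DESCRIPTOR send = { 0, pSFXSubmix };
XAUDIO2_VOICE_SENDS sendList = { 1, &send };

IXAudio2SourceVoice* pSourceVoice = nullptr;
hr = pXAudio2->CreateSourceVoice(&pSourceVoice, &wfx, 0,
                                 XAUDIO2_DEFAULT_FREQ_RATIO,
                                 nullptr, &sendList);
```

Volume or effects applied to pSFXSubmix then affect every source voice routed into it, which is the usual way to implement a per-category (music/SFX/voice) mixer bus.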
The XAUDIO2_EFFECT_CHAIN that is passed in as the pEffectChain argument, and any XAUDIO2_EFFECT_DESCRIPTOR information contained within it, are no longer needed after CreateSubmixVoice successfully completes, and may be deleted immediately after CreateSubmixVoice is called.

AudioEngine (DirectX Tool Kit for Audio)

This class represents an XAudio2 audio graph, device, and mastering voice. It manages XAudio2 object creation, audio engine per-frame updates, the mastering voice, and all source voices. The class optionally supports applying a mastering limiter and/or environmental reverb xAPO effects typically used in conjunction with 3D positional audio. Related tutorial: Adding audio to your project

Header

#include "Audio.h"

Initialization

This creates an XAudio2 interface, an XAudio2 mastering voice, and other global resources. XAudio2 requires that COM be initialized as a prerequisite (for example, with CoInitializeEx):

// This is only needed in Win32 desktop apps
hr = CoInitializeEx( nullptr, COINIT_MULTITHREADED );

All DirectXTK for Audio components require an AudioEngine instance. For exception safety, it is recommended you make use of the C++ RAII pattern and a std::unique_ptr.

std::unique_ptr<AudioEngine> audEngine;

The constructor optionally takes a number of parameters:

AudioEngine( AUDIO_ENGINE_FLAGS flags = AudioEngine_Default,
    const WAVEFORMATEX* wfx = nullptr,
    const wchar_t* deviceId = nullptr,
    AUDIO_STREAM_CATEGORY category = AudioCategory_GameEffects );
Parameters

flags: A combination of audio engine flags. Defaults to AudioEngine_Default.
wfx: Specifies the output format for the XAudio2 mastering voice (primarily the channel count and sample rate). If null, it uses the device defaults. See Wave Formats.

deviceId: Specifies the output device for the XAudio2 mastering voice. When using XAudio 2.8 or 2.9, this must be a WinRT device identifier, while on XAudio 2.7 this is a WASAPI audio end-point identifier. If null, it uses the default audio device. The XAudio2 Redistributable supports either WASAPI or WinRT device identifiers, as does XAudio 2.9 on Windows 10 (May 2019 Update; Version 1903; Build 18362) or later.

category: Specifies the audio end-point category for the XAudio2 mastering voice. On XAudio 2.7, this value is ignored.

Debugging facilities

When creating the AudioEngine, you can enable the XAudio2 debugging facilities by including the AudioEngine_Debug flag:

AUDIO_ENGINE_FLAGS eflags = AudioEngine_Default;
#ifdef _DEBUG
eflags |= AudioEngine_Debug;
#endif
audEngine = std::make_unique<AudioEngine>( eflags );

See XAudio2 Debugging Facilities. With XAudio 2.7, the developer runtime must be installed on the system for the AudioEngine_Debug case to succeed. With XAudio 2.8 on Windows 8.x or Windows 10, the debug support is built into the OS.

Device enumeration

AudioEngine provides a simple wrapper over the platform-specific audio device enumeration:

auto enumList = AudioEngine::GetRendererDetails();
if ( enumList.empty() )
{
    // No audio devices were found
}
else
{
    for( const auto& it : enumList )
    {
        // use it.deviceId, it.description
    }
}

Xbox One: The enumeration only returns the 'default' audio device identifier, with the description "Default".

Silent mode

If the initial device creation fails to find an audio device, by default AudioEngine creation will succeed and be in a 'silent' mode.
If the AudioEngine_ThrowOnNoAudioHW flag is given, the constructor will instead throw a C++ exception. 'Silent' mode allows the various DirectXTK for Audio objects to be created and their methods called, but no audio processing will take place. This can be detected either by a 'false' return from Update or by calling IsAudioDevicePresent.

At some later point (typically after detecting that a new audio device is available on the system), you can retry by calling Reset. If this returns 'true', then the AudioEngine is no longer in 'silent' mode. After that, future calls to SoundEffect or WaveBank's Play() will result in one-shots being fired. All SoundEffectInstances are in a stopped state, but can be started after the successful call to Reset.

bool Reset( const WAVEFORMATEX* wfx = nullptr, const wchar_t* deviceId = nullptr );

Parameters to Reset are similar to those of the AudioEngine constructor. If the original object was created with AudioEngine_ThrowOnNoAudioHW, then Reset will throw if no default audio device is found.

For XAudio 2.7 and XAudio 2.8 (Windows 8.x), this logic also handles the 'lost endpoint' scenario that happens if you unplug speakers or headphones. With XAudio 2.9 (Windows 10), the operating system handles rerouting to a new 'default' device automatically.

Win32 desktop applications

For Win32 desktop applications, you can be informed of new audio devices in your application with RegisterDeviceNotification:

#include ...

DEV_BROADCAST_DEVICEINTERFACE filter = {};
filter.dbcc_size = sizeof( filter );
filter.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
filter.dbcc_classguid = KSCATEGORY_AUDIO;

HDEVNOTIFY hNewAudio = RegisterDeviceNotification( hwnd, &filter, ...
case WM_DEVICECHANGE:
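The WM_DEVICECHANGE handler above can then attempt to recover from 'silent' mode when a new device arrives. A sketch (the recovery logic shown is illustrative; audEngine is the engine instance from earlier):

```cpp
case WM_DEVICECHANGE:
    if ( wParam == DBT_DEVICEARRIVAL )
    {
        auto pDev = reinterpret_cast<PDEV_BROADCAST_HDR>( lParam );
        if ( pDev && pDev->dbch_devicetype == DBT_DEVTYP_DEVICEINTERFACE )
        {
            // A new audio device has arrived; try to leave 'silent' mode.
            if ( audEngine && !audEngine->IsAudioDevicePresent() )
            {
                audEngine->Reset();
            }
        }
    }
    return 0;
```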
Or you can make use of IMMNotificationClient.

Universal Windows Platform apps

You can be informed of new audio devices by using the WinRT class DeviceWatcher in the Windows.Devices.Enumeration namespace. See the Device Enumeration sample (DeviceEnumerationAndPairing).

Per-frame processing

Update should be called often, usually in a per-frame update. This can be done on the main rendering thread, or from a worker thread. It returns false if the audio engine is in 'silent' mode.

if ( !audEngine->Update() )
{
    // No audio device is active
}

Note that if XAudio2 encounters a critical error (typically because the current audio device is disconnected, or on the Windows platform if the speakers are unplugged from the current audio device), then the audio engine will automatically be in 'silent' mode. This can be detected either by a 'false' return from Update or by calling IsCriticalError. If this occurs, you should try calling Reset to try the new 'default' device, if there is one. If that retry fails, you should wait until a new audio device is available (as with 'silent' mode above).

Pausing

Typically games will suspend audio when paused or when the game is 'tabbed away' from. Suspend and Resume can be used to globally halt/restart audio processing.

Positional 3D audio

AudioEngine by default supports both standard and positional 3D audio voices, but without environmental reverb. If AudioEngine_EnvironmentalReverb is given, then a reverb effect is created and used for all 3D audio voices--it is not applied to standard voices. You can use SetReverb with I3DL2 presets or provide 'native' reverb settings.

audEngine->SetReverb( Reverb_Quarry );
...
audEngine->SetReverb( Reverb_Off );

Optionally, AudioEngine_ReverbUseFilters can be used with AudioEngine_EnvironmentalReverb to provide some additional reverb/occlusion effects.

Voice management
XAudio2 source voices used for one-shot sounds are pooled and reused by the engine. TrimVoicePool can be used to release the source voices in the idle pool back to the system; the trim is fully effective only if Update has been called (twice, with some delay between the calls) beforehand, so that any finished one-shots have been returned to the idle pool.
Mastering volume limiter

The XAudio2 audio renderer makes use of single-precision floating-point values, which can exceed the range of the audio hardware, particularly after 3D positional audio computations, custom xAPOs, or volume settings greater than 1.0. This can produce distortion or other clipping artifacts in the final output. Therefore, DirectXTK for Audio supports attaching a mastering volume limiter xAPO to the mastering voice by setting the AudioEngine_UseMasteringLimiter flag. It uses default settings, which can be modified by calling SetMasteringLimit( release, loudness ). See FXMASTERINGLIMITER_PARAMETERS.

AUDIO_ENGINE_FLAGS eflags = AudioEngine_UseMasteringLimiter;
#ifdef _DEBUG
eflags = eflags | AudioEngine_Debug;
#endif
audEngine = std::make_unique<AudioEngine>( eflags );

Statistics

The GetStatistics function returns information on the number of playing sounds, allocated instances, audio bytes in loaded SoundEffect and WaveBank objects, the size of buffers used for streaming in SoundStreamInstance objects, and XAudio2 source voices allocated for various purposes. These values do not rely on the debug facilities being enabled.
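A minimal sketch of retrieving the statistics (the exact field names live in the AudioStatistics structure in the DirectX Tool Kit headers; the ones shown in the comment are examples, so verify them against your version):

```cpp
auto stats = audEngine->GetStatistics();
// Inspect fields such as stats.playingOneShots, stats.playingInstances,
// stats.allocatedVoices, and stats.audioBytes.
```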
Low-level interface access

The following methods can be used to obtain the low-level XAudio2 interface objects used by DirectXTK for Audio. Be sure to also use RegisterNotify, UnregisterNotify, and the IVoiceNotify interface to get proper notifications from the XAudio2 engine. You should prefer AllocateVoice to creating your own source voices if possible, and be sure to use DestroyVoice to free the source voice if it is not a 'one-shot' voice. One-shot voices are managed and cleaned up by the engine itself.
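The accessors in question look approximately like this (a sketch based on the DirectX Tool Kit Audio.h header; verify the exact signatures against the version you are using):

```cpp
IXAudio2*               GetInterface() const;    // XAudio2 engine object
IXAudio2MasteringVoice* GetMasterVoice() const;  // the mastering voice
IXAudio2SubmixVoice*    GetReverbVoice() const;  // reverb submix voice, if enabled
X3DAUDIO_HANDLE&        Get3DHandle() const;     // X3DAudio instance handle
```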