Android WebRTC's Use of AudioRecord

AudioRecord is the Android class for capturing raw PCM audio data. WebRTC's wrapper around it lives in
org/webrtc/audio/WebRtcAudioRecord.java. In this article we look at how the AudioRecord is created and started, how captured audio data is read, and how it is destroyed.
Creation and initialization

```java
private int initRecording(int sampleRate, int channels) {
  Logging.d(TAG, "initRecording(sampleRate=" + sampleRate + ", channels=" + channels + ")");
  if (audioRecord != null) {
    reportWebRtcAudioRecordInitError("InitRecording called twice without StopRecording.");
    return -1;
  }
  final int bytesPerFrame = channels * (BITS_PER_SAMPLE / 8);
  final int framesPerBuffer = sampleRate / BUFFERS_PER_SECOND;
  byteBuffer = ByteBuffer.allocateDirect(bytesPerFrame * framesPerBuffer);
  Logging.d(TAG, "byteBuffer.capacity: " + byteBuffer.capacity());
  emptyBytes = new byte[byteBuffer.capacity()];
  // Rather than passing the ByteBuffer with every callback (requiring
  // the potentially expensive GetDirectBufferAddress) we simply have the
  // native class cache the address to the memory once.
  nativeCacheDirectBufferAddress(byteBuffer, nativeAudioRecord);
  // Get the minimum buffer size required for the successful creation of
  // an AudioRecord object, in byte units.
  // Note that this size doesn't guarantee a smooth recording under load.
  final int channelConfig = channelCountToConfiguration(channels);
  int minBufferSize =
      AudioRecord.getMinBufferSize(sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT);
  if (minBufferSize == AudioRecord.ERROR || minBufferSize == AudioRecord.ERROR_BAD_VALUE) {
    reportWebRtcAudioRecordInitError("AudioRecord.getMinBufferSize failed: " + minBufferSize);
    return -1;
  }
  Logging.d(TAG, "AudioRecord.getMinBufferSize: " + minBufferSize);
  // Use a larger buffer size than the minimum required when creating the
  // AudioRecord instance to ensure smooth recording under load. It has been
  // verified that it does not increase the actual recording latency.
  int bufferSizeInBytes = Math.max(BUFFER_SIZE_FACTOR * minBufferSize, byteBuffer.capacity());
  Logging.d(TAG, "bufferSizeInBytes: " + bufferSizeInBytes);
  try {
    audioRecord = new AudioRecord(audioSource, sampleRate, channelConfig,
        AudioFormat.ENCODING_PCM_16BIT, bufferSizeInBytes);
  } catch (IllegalArgumentException e) {
    reportWebRtcAudioRecordInitError("AudioRecord ctor error: " + e.getMessage());
    releaseAudioResources();
    return -1;
  }
  if (audioRecord == null || audioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    reportWebRtcAudioRecordInitError("Failed to create a new AudioRecord instance");
    releaseAudioResources();
    return -1;
  }
  if (effects != null) {
    effects.enable(audioRecord.getAudioSessionId());
  }
  logMainParameters();
  logMainParametersExtended();
  return framesPerBuffer;
}
```

The initialization method does two main things.

  • Creating the buffer
  1. Since the code that actually consumes the data lives in the native layer, a Java direct buffer is created here. AudioRecord also offers an interface for reading data into a ByteBuffer, and the code that copies the captured data into the ByteBuffer is itself native, so a direct buffer is more efficient in this case.
  2. The ByteBuffer's capacity equals the size of a single read. Android stores audio in packed (interleaved) layout: with multiple channels, the different channels of one sample point are stored contiguously, followed by the channels of the next sample point. A frame is the set of all channel data for one sample point. Each read covers 10 ms worth of frames, i.e. the sample rate divided by 100 (a number of sample points equal to the sample rate corresponds to 1 s of data, so dividing by 100 gives 10 ms). The ByteBuffer's capacity is therefore the frame count times the channel count times the bytes per sample (PCM 16-bit means two bytes per sample).
  3. The nativeCacheDirectBufferAddress JNI function called here saves the ByteBuffer's access address in the native layer up front, so the address does not have to be looked up again every time audio data arrives.
  • Creating the AudioRecord object. The constructor takes several parameters, analyzed below.
  1. audioSource is the audio capture mode; the default is VOICE_COMMUNICATION, a mode that enables the hardware AEC (acoustic echo cancellation).
  2. sampleRate is the sample rate.
  3. channelConfig is the channel configuration, derived from the channel count.
  4. audioFormat is the audio data format; here it is AudioFormat.ENCODING_PCM_16BIT, i.e. 16-bit PCM.
  5. bufferSize is the buffer size the system uses when creating the AudioRecord. The larger of two values is chosen: twice the minimum buffer size returned by AudioRecord.getMinBufferSize, or the capacity of the ByteBuffer used for reading data. As the code comment explains, using twice the minimum buffer size keeps audio capture running smoothly even under high system load, and the larger buffer does not increase capture latency.
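The buffer-size arithmetic above can be sketched as plain Java. The constants mirror the ones used in WebRtcAudioRecord; the minBufferSize value in the demo is a made-up placeholder, since the real value comes from AudioRecord.getMinBufferSize at runtime on a device.

```java
public class BufferSizeSketch {
    // Constants as defined in WebRtcAudioRecord.
    static final int BITS_PER_SAMPLE = 16;
    static final int BUFFERS_PER_SECOND = 100; // one buffer per 10 ms
    static final int BUFFER_SIZE_FACTOR = 2;

    /** Capacity of the direct ByteBuffer: 10 ms of frames times bytes per frame. */
    static int byteBufferCapacity(int sampleRate, int channels) {
        int bytesPerFrame = channels * (BITS_PER_SAMPLE / 8); // one sample point, all channels
        int framesPerBuffer = sampleRate / BUFFERS_PER_SECOND; // 10 ms worth of sample points
        return bytesPerFrame * framesPerBuffer;
    }

    /** AudioRecord is created with the larger of 2 * minBufferSize and the ByteBuffer capacity. */
    static int bufferSizeInBytes(int minBufferSize, int byteBufferCapacity) {
        return Math.max(BUFFER_SIZE_FACTOR * minBufferSize, byteBufferCapacity);
    }

    public static void main(String[] args) {
        int capacity = byteBufferCapacity(48000, 1);       // mono at 48 kHz: 2 * 480 = 960 bytes
        int minBufferSize = 1280;                          // hypothetical device value
        int bufferSize = bufferSizeInBytes(minBufferSize, capacity); // max(2560, 960) = 2560
        System.out.println(capacity + " " + bufferSize);   // prints "960 2560"
    }
}
```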
Starting capture

```java
private boolean startRecording() {
  Logging.d(TAG, "startRecording");
  assertTrue(audioRecord != null);
  assertTrue(audioThread == null);
  try {
    audioRecord.startRecording();
  } catch (IllegalStateException e) {
    reportWebRtcAudioRecordStartError(AudioRecordStartErrorCode.AUDIO_RECORD_START_EXCEPTION,
        "AudioRecord.startRecording failed: " + e.getMessage());
    return false;
  }
  if (audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
    reportWebRtcAudioRecordStartError(AudioRecordStartErrorCode.AUDIO_RECORD_START_STATE_MISMATCH,
        "AudioRecord.startRecording failed - incorrect state :"
            + audioRecord.getRecordingState());
    return false;
  }
  audioThread = new AudioRecordThread("AudioRecordJavaThread");
  audioThread.start();
  return true;
}
```

Starting does three things: it calls AudioRecord.startRecording, verifies that the recording state actually switched to RECORDSTATE_RECORDING, and spawns the AudioRecordThread that reads the captured data.
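The body of AudioRecordThread is not shown in this excerpt, but its run loop follows a simple pattern: repeatedly fill the direct ByteBuffer via AudioRecord.read and hand the data to the native layer. Below is a self-contained sketch of that control flow; the Android AudioRecord call is replaced by a hypothetical PcmSource interface, and the nativeDataIsRecorded callback by a counter, so the logic runs anywhere. It is a sketch of the pattern, not the actual WebRTC implementation.

```java
import java.nio.ByteBuffer;

public class CaptureLoopSketch {
    /** Stand-in for AudioRecord.read(ByteBuffer, int): returns bytes read, or a negative error. */
    interface PcmSource {
        int read(ByteBuffer buffer, int sizeInBytes);
    }

    private volatile boolean keepAlive = true;
    private int buffersDelivered = 0;

    /** Mirrors the shape of the capture thread's loop: read one full 10 ms buffer per
     *  iteration; deliver it on success, stop the loop on a read error. */
    public int runLoop(PcmSource source, ByteBuffer byteBuffer, int maxIterations) {
        for (int i = 0; i < maxIterations && keepAlive; i++) {
            int bytesRead = source.read(byteBuffer, byteBuffer.capacity());
            if (bytesRead == byteBuffer.capacity()) {
                buffersDelivered++; // real code: nativeDataIsRecorded(bytesRead, nativeAudioRecord)
            } else {
                keepAlive = false;  // real code reports the error before stopping
            }
        }
        return buffersDelivered;
    }

    /** Demo: a source that always fills the buffer delivers one buffer per iteration. */
    public static int demo() {
        ByteBuffer buf = ByteBuffer.allocateDirect(960); // 10 ms of mono 16-bit PCM at 48 kHz
        return new CaptureLoopSketch().runLoop((b, n) -> n, buf, 5);
    }
}
```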

