EZAudio

EZAudio - An iOS and macOS audio visualization framework built upon Core Audio, useful for anyone doing real-time, low-latency audio processing and visualizations.


Audio input source in EZAudio

I am using EZAudio to record audio and plot its waveform.

How can I record audio with the headphone's mic and the iPhone's built-in microphone simultaneously?

I want to take input from both sources. How can I achieve this using EZAudio?

Thanks in advance.


Source: (StackOverflow)

EZAudio CocoaPods module import error

When adding EZAudio to my Swift project using CocoaPods, I get a compiler error that says:
Could not build Objective-C module 'EZAudio'

My Podfile is this:

platform :ios, '9'
use_frameworks!

pod 'CorePlot'
pod 'SWRevealViewController'
pod 'EZAudio'

I import it in a Swift file like so:

import EZAudio

I am not using a bridging header. Does anybody have any insight into why this is happening?


Source: (StackOverflow)

iOS AudioKit / EZAudio FFT values

I have a Web Audio API-based signal processing application and I have to port it to iOS with the AudioKit framework (which is based on EZAudio).

I only need the frequency-domain data, which in the Web Audio API contains numbers between 0 and 255.

But in AudioKit, AKFFTTap's fftData gives me back floats between -6 and 6, and sometimes values below 1000.

Here is what I have already tried on iOS:

init process ...

let mic = AKMicrophone()
let fftTap = AKFFTTap.init(mic)

request ...

return fftTap.fftData

on Web Audio API: init...

var analyser = audioContext.createAnalyser();

request...

let freqDomain = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqDomain);
return freqDomain

How can I get back the same data?
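
Web Audio's getByteFrequencyData converts each bin to decibels and maps the default window of minDecibels (-100 dB) to maxDecibels (-30 dB) onto 0-255. A minimal Swift sketch of that mapping, assuming fftData holds linear magnitudes (the function below is hypothetical, not AudioKit API):

import Foundation

// Sketch: map linear FFT magnitudes onto the 0-255 byte scale used by
// Web Audio's getByteFrequencyData (default dB window of -100...-30).
func byteFrequencyData(from fftData: [Float],
                       minDecibels: Float = -100,
                       maxDecibels: Float = -30) -> [UInt8] {
    let range = maxDecibels - minDecibels
    return fftData.map { magnitude in
        // Convert the linear magnitude to decibels; guard against log10(0).
        let db = 20 * log10(max(magnitude, 1e-12))
        // Normalize into the dB window, clamp, and scale to a byte.
        let normalized = min(max((db - minDecibels) / range, 0), 1)
        return UInt8(normalized * 255)
    }
}

// Hypothetical usage with AKFFTTap (fftData may be [Double] depending on
// the AudioKit version):
// let bytes = byteFrequencyData(from: fftTap.fftData.map { Float($0) })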


Source: (StackOverflow)

Can someone explain how this code converts volume to decibels using the Accelerate Framework?

I'm building an iOS app using EZAudio. Its delegate returns a float** buffer containing float values for the detected audio. This delegate is called constantly, and its work is done on a separate thread.

What I am trying to do is take the float values from EZAudio and convert them into decibels.


EZAudioDelegate

Here's my simplified EZAudio delegate method for getting microphone data:

- (void)microphone:(EZMicrophone *)microphone hasAudioReceived:(float **)buffer withBufferSize:(UInt32)bufferSize withNumberOfChannels:(UInt32)numberOfChannels {
    /*
     *  Returns a float array called buffer that contains the stereo signal data
     *  buffer[0] is the left audio channel
     *  buffer[1] is the right audio channel
     */

    // The delegate is called on a background audio thread, so hop to the main queue before touching the UI
    dispatch_async(dispatch_get_main_queue(), ^{

        float decibels = [self getDecibelsFromVolume:buffer withBufferSize:bufferSize];

        NSLog(@"Decibels: %f", decibels);

    });

}

The Problem

The problem is that, after implementing the solution below, I do not understand how it works. If someone could explain how it converts volume to decibels, I would be very grateful.


The Code

The solution uses the Accelerate framework routines vDSP_vsq, vDSP_meanv, and vDSP_vdbcon to convert the volume into decibels.

Below is the method getDecibelsFromVolume that is called from the EZAudio Delegate. It is passed the float** buffer and bufferSize from the delegate.

- (float)getDecibelsFromVolume:(float**)buffer withBufferSize:(UInt32)bufferSize {

    // Decibel Calculation.

    float one = 1.0;
    float meanVal = 0.0;
    float tiny = 0.1;
    float lastdbValue = 0.0; // note: this is a local variable, so the moving average resets on every call

    // Square each sample of the left channel in place: buffer[0][i] *= buffer[0][i].
    vDSP_vsq(buffer[0], 1, buffer[0], 1, bufferSize);

    // Average the squared samples, i.e. the mean power of the buffer.
    vDSP_meanv(buffer[0], 1, &meanVal, bufferSize);

    // Convert to decibels relative to 1.0; flag 0 selects the amplitude
    // convention, so this computes 20 * log10(meanVal / 1.0).
    vDSP_vdbcon(&meanVal, 1, &one, &meanVal, 1, 1, 0);


    // Exponential moving average of the dB level so that only continuous sounds register.

    float currentdb = 1.0 - (fabs(meanVal) / 100);

    if (lastdbValue == INFINITY || lastdbValue == -INFINITY || isnan(lastdbValue)) {
        lastdbValue = 0.0;
    }

    float dbValue = ((1.0 - tiny) * lastdbValue) + tiny * currentdb;

    lastdbValue = dbValue;

    return dbValue;
}
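
For reference, here is a minimal Swift sketch of the same math without Accelerate (assuming mono float samples in the -1...1 range), which may make the conversion easier to follow:

import Foundation

// vDSP_vsq squares every sample, vDSP_meanv averages the squares (the mean
// power), and vDSP_vdbcon with flag 0 converts that value to decibels
// relative to 1.0 using the amplitude convention, i.e. 20 * log10(x).
func decibels(fromSamples samples: [Float]) -> Float {
    guard !samples.isEmpty else { return -Float.infinity }
    let meanSquare = samples.map { $0 * $0 }.reduce(0, +) / Float(samples.count)
    return 20 * log10(max(meanSquare, Float.leastNonzeroMagnitude))
}

The remaining lines rescale that dB value to roughly 0-1 and then smooth it with an exponential moving average so brief spikes are damped.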

Source: (StackOverflow)

How can I trim an audio recorded with EZAudioRecorder?

I want to trim audio recorded with EZAudioRecorder.

I am using the code below to trim the audio. It works fine for audio recorded with AVAudioRecorder, but with EZAudioRecorder it hits the error block with "couldn't open file".

-(BOOL)trimAudiofile{
   float audioStartTime=1.0;
   float audioEndTime=2.0;//define end time of audio
   NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
   [dateFormatter setDateFormat:@"yyyy-MM-dd_HH-mm-ss"];
   NSArray *paths = NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES);
   NSString *libraryCachesDirectory = [paths objectAtIndex:0];
   libraryCachesDirectory = [libraryCachesDirectory stringByAppendingPathComponent:@"Caches"];
   NSString *OutputFilePath = [libraryCachesDirectory stringByAppendingFormat:@"/output_%@.m4a", [dateFormatter stringFromDate:[NSDate date]]];
   NSURL *audioFileOutput = [NSURL fileURLWithPath:OutputFilePath];
   NSURL *audioFileInput=[self testFilePathURL];//<Path of orignal audio file>

   if (!audioFileInput || !audioFileOutput)
   {
       return NO;
   }

  [[NSFileManager defaultManager] removeItemAtURL:audioFileOutput error:NULL];
  AVAsset *asset = [AVAsset assetWithURL:audioFileInput];

  AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:asset
                                                                    presetName:AVAssetExportPresetAppleM4A];
 if (exportSession == nil)
 {
     return NO;
 }
 CMTime startTime = CMTimeMake((int)(floor(audioStartTime * 100)), 100);
 CMTime stopTime = CMTimeMake((int)(ceil(audioEndTime * 100)), 100);
 CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, stopTime);

 exportSession.outputURL = audioFileOutput;
 exportSession.timeRange = exportTimeRange;
 exportSession.outputFileType = AVFileTypeAppleM4A;

[exportSession exportAsynchronouslyWithCompletionHandler:^
{
   if (AVAssetExportSessionStatusCompleted == exportSession.status)
   {
     NSLog(@"Export OK");
   }
   else if (AVAssetExportSessionStatusFailed == exportSession.status)
   {
     NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
   }
 }];
 return YES;
}

Note: the audio file exists in the Documents directory, and EZAudioPlayer is able to play it.

Can anyone tell me where I am going wrong? Any help will be appreciated.

Thanks in advance.


Source: (StackOverflow)

External clock source with usb audio device in iOS

My app uses an external USB microphone with a very accurate temperature-compensated crystal oscillator (TCXO). The sample rate is 48 kHz. I plug it into iOS through the camera connection kit adapter. I'm using the EZAudio library and everything works fine, except that iOS seems to keep its own internal clock source for audio sampling instead of my accurate 48 kHz clock.

I read all the documentation about Core Audio but I didn't find anything related to the clock source when using Lightning audio.

Is there any way to choose between the internal and external clock source?

Thanks!

var audioFormatIn = AudioStreamBasicDescription(mSampleRate: Float64(48000),
                                                mFormatID: AudioFormatID(kAudioFormatLinearPCM),
                                                mFormatFlags: AudioFormatFlags(kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked),
                                                mBytesPerPacket: 2,
                                                mFramesPerPacket: 1,
                                                mBytesPerFrame: 2,
                                                mChannelsPerFrame: 1,
                                                mBitsPerChannel: 16,
                                                mReserved: 0)

func initAudio()
{       
    let session : AVAudioSession = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setMode(AVAudioSessionModeMeasurement)
        try session.setActive(true)
    }
    catch {
        print("Error Audio")
    }
    self.microphone = EZMicrophone(microphoneDelegate: self, withAudioStreamBasicDescription: audioFormatIn)
}

UPDATE: Thanks to @Rhythmic Fistman, setting the preferred sample rate partly solved the problem: iOS no longer resamples and the TCXO remains the clock master. But the signal now quickly becomes corrupted with what seem to be empty samples in the buffers, and the problem gets worse as the recording length grows. Of course, since I need the Lightning port to plug in the hardware, it's really difficult to debug!
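
The preferred-sample-rate change mentioned above would look roughly like this (a sketch only, assuming the same session setup as in initAudio()):

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setMode(AVAudioSessionModeMeasurement)
    // Ask iOS to run the session at 48 kHz so no resampling happens and
    // the external TCXO stays the clock master.
    try session.setPreferredSampleRate(48000)
    try session.setActive(true)
} catch {
    print("Error Audio")
}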

Screenshot of the waveform after 7 minutes: [image]

Screenshot of the waveform after 15 minutes: [image]


Source: (StackOverflow)

EZAudio doesn't work with Bluetooth devices

I am using EZAudio to play back streaming audio data. Here is the graph: AUConverter -> MultiChannelMixer -> Output. The converter is configured to convert audio data with a sample rate of 48000 Hz to the device sample rate (normally 44100 Hz). The audio data is written into the converter node via a render callback:

AURenderCallbackStruct converterCallback;
converterCallback.inputProc = EZOutputConverterInputCallback;
converterCallback.inputProcRefCon = (__bridge void *)(self);
    [EZAudioUtilities checkResult:AUGraphSetNodeInputCallback(self.info->graph,
                                                              self.info->converterNodeInfo.node,
                                                              0,
                                                              &converterCallback)
                        operation:"Failed to set render callback on converter node"];

This graph works well with the iPhone's speakers. But when I select a Bluetooth device, the callback is no longer triggered and no audio is played. If I remove the converter node, I can play the audio again on a Bluetooth device, but the sound quality is terrible. What am I missing in order to play audio on a Bluetooth device?

Thanks.


Source: (StackOverflow)

Play audio file using EZAudio Framework in Swift

Hi, I am trying to convert the following Objective-C code to Swift:

EZAudioFile *audioFile = [EZAudioFile audioFileWithURL:NSURL]; //required type NSURL
[self.player playAudioFile:audioFile];

But I am unable to make it work.

let audioFile = EZAudioFile.url(EZAudioFile) //required type EZAudioFile, so I am unable to pass the NSURL of the audio file here.
player.play()

The error is: Cannot convert value of type 'NSURL' to expected argument type 'EZAudioFile'

The Objective-C code above is referenced from here: EZAudio Example: Record File
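
For reference, a rough Swift sketch of the two Objective-C calls above (the bridged initializer and method spellings differ between EZAudio and Swift versions, so treat the exact names as assumptions and check the generated interface):

import EZAudio

// Placeholder URL; substitute the real location of the audio file.
let someURL = URL(fileURLWithPath: "path/to/audio.m4a")
let audioFile = EZAudioFile(url: someURL)   // mirrors +audioFileWithURL:
player.playAudioFile(audioFile)             // mirrors -playAudioFile: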


Source: (StackOverflow)

Save audio file after applying a filter (AVAudioUnitEQ) and finally save it as mp3? [duplicate]


I am struggling to save the audio file after applying the filter. The filter applies to the AVAudioPlayerNode, which means it is applied correctly during playback, but how can I save the result? I have been stuck on this for the last 3 days; any help will be appreciated. Below is the code I am using:

engine = [[AVAudioEngine alloc] init];

- (void)setupEQ
{
    NSLog(@"setupEQ");

    unitEq = [[AVAudioUnitEQ alloc] initWithNumberOfBands:12];
    unitEq.globalGain = 3.0;

    // Ten parametric bands: each center frequency is paired with the key
    // holding its gain in the currently selected filter preset.
    NSArray *frequencies = @[@31, @63, @125, @250, @500, @1000, @2000, @4000, @8000, @16000];
    NSArray *gainKeys = @[@"filter_freq_31.5hz", @"filter_freq_63hz", @"filter_freq_125hz",
                          @"filter_freq_250hz", @"filter_freq_500hz", @"filter_freq_1khz",
                          @"filter_freq_2khz", @"filter_freq_4khz", @"filter_freq_8khz",
                          @"filter_freq_16khz"];
    for (NSUInteger i = 0; i < frequencies.count; i++) {
        AVAudioUnitEQFilterParameters *band = unitEq.bands[i];
        band.filterType = AVAudioUnitEQFilterTypeParametric;
        band.frequency = [frequencies[i] floatValue];
        band.bandwidth = 1.0;
        band.gain = [[[filter_arr objectAtIndex:index_filter] objectForKey:gainKeys[i]] floatValue];
        band.bypass = NO;
    }

    // Band 10 is a low-pass at 16857 Hz, band 11 a high-pass at 205 Hz.
    AVAudioUnitEQFilterParameters *lowPass = unitEq.bands[10];
    lowPass.filterType = AVAudioUnitEQFilterTypeLowPass;
    lowPass.frequency = 16857;
    lowPass.bypass = NO;

    AVAudioUnitEQFilterParameters *highPass = unitEq.bands[11];
    highPass.filterType = AVAudioUnitEQFilterTypeHighPass;
    highPass.frequency = 205;
    highPass.bypass = NO;

    [engine attachNode:unitEq];
}

audioFile = [[AVAudioFile alloc] initForReading:[NSURL fileURLWithPath:[NSString stringWithFormat:@"%@", [self audio_file_path:mp3FileName]]] error:&playerError];
[self setupPlayer];
[self setupEQ];

AVAudioMixerNode *mixerNode = [engine mainMixerNode];
[engine connect:audio_player_node to:unitEq format:audio_file.processingFormat];
[engine connect:unitEq to:mixerNode format:audio_file.processingFormat];

These are code samples and they work fine for playback. But how can I store the audio file with these same frequency settings applied? The audio file should be in mp3 format; Apple's documentation only targets saving the file as m4a.
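
One common approach, sketched here in Swift under the assumption that the engine graph above (player -> EQ -> main mixer) is already running, is to install a tap on the main mixer and append each rendered buffer to an AVAudioFile. Note that Core Audio ships no MP3 encoder, so the result has to be written as PCM/CAF or AAC (m4a); producing an mp3 requires a third-party encoder such as LAME.

import AVFoundation

// Sketch: capture whatever the main mixer outputs (already EQ'd) into a
// file while the engine plays. The URL and buffer size are assumptions.
func recordEQOutput(from engine: AVAudioEngine, to url: URL) throws {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    // Writing with the mixer's own settings produces an uncompressed file
    // (use a .caf extension); converting to m4a/mp3 is a separate step.
    let outputFile = try AVAudioFile(forWriting: url, settings: format.settings)
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // Append each rendered buffer; errors are ignored in this sketch.
        try? outputFile.write(from: buffer)
    }
}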


Source: (StackOverflow)

EZAudio FFT data - what is the range of output floats?

I'm using the EZAudio project which is located at:

https://github.com/syedhali/EZAudio

I am trying to create an app that outputs a pattern based on the FFT of the audio source. Basically, as the audio plays, time runs along the x-axis and the FFT runs along the y-axis. As the music plays, a pattern of squares is created, and the colour of each square is dictated by the amplitude of the frequency in that FFT sub-region.

The code looks like this:

//------------------------------------------------------------------------------
#pragma mark - EZAudioPlayerDelegate
//------------------------------------------------------------------------------

- (void)  audioPlayer:(EZAudioPlayer *)audioPlayer
          playedAudio:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
          inAudioFile:(EZAudioFile *)audioFile
{
    [self.fft computeFFTWithBuffer:buffer[0] withBufferSize:bufferSize];
}

//------------------------------------------------------------------------------
#pragma mark - EZAudioFFTDelegate
//------------------------------------------------------------------------------

- (void)        fft:(EZAudioFFT *)fft
 updatedWithFFTData:(float *)fftData
         bufferSize:(vDSP_Length)bufferSize
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlot updateBuffer:fftData withBufferSize:(UInt32)bufferSize];
        for (int i = 0; i<bufferSize; i++) {
            NSLog(@"fft val %f", fftData[i]);
        }
    });
}

The NSLog is there to demonstrate my problem. I've picked two random lists from different time points and pasted them below. Basically, I thought the maximum an FFT value could be was 1, but one list contains a value of 6.952471, while in the other all the values are tiny (e.g. 0.001) even though I can hear that something is playing. What am I doing wrong?

I am trying to reproduce something that uses Flash's computeSpectrum function, which returns an array of 256 floats between 0 and 1. This seems pretty easy!

http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/media/SoundMixer.html#computeSpectrum()

Point 1 | Point 2
------- | -------
0.005816 | 6.952471
0.010704 | 1.260185
0.004519 | 0.266269
0.002781 | 0.152540
0.038111 | 0.073087
0.274509 | 0.125507
0.004770 | 0.010155
0.001016 | 0.023328
0.000611 | 0.032141
0.000115 | 0.003583
0.001193 | 0.009749
0.000731 | 0.011863
0.000630 | 0.027307
0.000996 | 0.002040
0.000452 | 0.005508
0.002430 | 0.004133
0.001266 | 0.006598
0.000089 | 0.001877
0.000483 | 0.002913
0.000826 | 0.004395
0.005266 | 0.003958
0.014211 | 0.002250
0.008589 | 0.008281
0.004920 | 0.002627
0.008350 | 0.002476
0.002999 | 0.003924
0.002559 | 0.007930
0.005831 | 0.004289
0.011384 | 0.001885
0.012413 | 0.005358
0.006980 | 0.002461
0.013374 | 0.000612
0.015251 | 0.005339
0.003759 | 0.002789
0.006180 | 0.000378
0.003345 | 0.001412
0.008930 | 0.001558
0.003787 | 0.002587
0.000106 | 0.001820
0.001123 | 0.000735
0.000924 | 0.001579
0.000151 | 0.001727
0.002845 | 0.000791
0.008067 | 0.001092
0.005768 | 0.002239
0.000094 | 0.001397
0.001194 | 0.002666
0.001379 | 0.001862
0.006607 | 0.002025
0.000655 | 0.002483
0.004555 | 0.001759
0.000482 | 0.001794
0.002681 | 0.002802
0.000500 | 0.000716
0.002257 | 0.002413
0.000055 | 0.001505
0.001836 | 0.002478
0.002179 | 0.001258
0.006315 | 0.001065
0.000939 | 0.001652
0.000145 | 0.002018
0.004384 | 0.001556
0.002653 | 0.002018
0.000954 | 0.001747
0.000014 | 0.001198
0.000021 | 0.001133
0.000004 | 0.000924
0.000018 | 0.000901
0.000051 | 0.000748
0.000023 | 0.000646
0.000022 | 0.000739
0.000114 | 0.000669
0.000567 | 0.000658
0.001699 | 0.000710
0.000680 | 0.000622
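
For what it's worth, one way to get values in the 0-1 range regardless of the absolute FFT scaling (an assumption about the goal, not documented EZAudio behaviour) is to normalise each frame by its peak bin. A quick Swift sketch:

// Sketch: normalise one frame of linear FFT magnitudes into 0...1 by
// dividing by the frame's largest bin, giving a relative spectrum.
func normalizedSpectrum(_ fftData: UnsafeMutablePointer<Float>, count: Int) -> [Float] {
    let frame = Array(UnsafeBufferPointer(start: fftData, count: count))
    guard let peak = frame.max(), peak > 0 else { return frame }
    return frame.map { $0 / peak }
}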

Source: (StackOverflow)

How to scroll audio graph in EZAudio in iOS?

I am using the EZAudio library to record/play audio and plot a graph with its Rolling plot type. I want to make the plot scrollable so that I can go to a specific portion of the recording. Right now it appends the new recording at the end, but it also deletes it from the start. Please help...


Source: (StackOverflow)

EZAudio execution in the background

Here is what I tried:

__block UIBackgroundTaskIdentifier task=0;
task=[application beginBackgroundTaskWithExpirationHandler:^{
    NSLog(@"Expiration handler called %f",[application backgroundTimeRemaining]);
    [application endBackgroundTask:task];
    task=UIBackgroundTaskInvalid;
}];

I want to display a waveform while I record audio. I am using the EZAudio classes, but the app crashes when I press the Home button or lock the screen. How can I keep the app running in the background? Please help me.
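
For context, a background task handler by itself does not keep Core Audio running once the app is backgrounded; recording in the background normally also needs the audio background mode and an active record-capable session. A hedged sketch of that configuration (standard iOS keys and APIs, nothing EZAudio-specific):

// Info.plist: add "audio" to UIBackgroundModes so audio I/O keeps running
// after the Home button or the lock screen:
//   <key>UIBackgroundModes</key>
//   <array><string>audio</string></array>

import AVFoundation

// Activate a record-capable session before starting EZMicrophone/EZRecorder.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(AVAudioSessionCategoryPlayAndRecord)
try? session.setActive(true)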


Source: (StackOverflow)

How to get FFT Data in Swift using EZAudio?

I am working on an FFT analysis in Swift with EZAudio.

My problem is how I can get all the FFT data from EZAudio.

I want to write an algorithm that checks whether a frequency is present and, if so, how strong it is.

Example:

I look in the FFT data to see whether the frequency 2000 Hz is present and, if it is, how much energy it has.

Here is my code:

import UIKit
import Accelerate

class ViewController: UIViewController, EZMicrophoneDelegate, EZAudioFFTDelegate {

private let ViewControllerFFTWindowSize: vDSP_Length = 4096

var microphone: EZMicrophone!
var fft: EZAudioFFTRolling!

override func loadView() {
    super.loadView()

    //setup audio session
    let session = AVAudioSession.sharedInstance()
    do{
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setActive(true)
    }catch{
        print("Audio Session setup Fails")
    }

    microphone = EZMicrophone(delegate: self, startsImmediately: true)
    fft = EZAudioFFTRolling(windowSize: ViewControllerFFTWindowSize, sampleRate: Float(microphone.audioStreamBasicDescription().mSampleRate), delegate: self)

    microphone.startFetchingAudio()
}

func microphone(microphone: EZMicrophone!, hasAudioReceived buffer: UnsafeMutablePointer<UnsafeMutablePointer<Float>>, withBufferSize bufferSize: UInt32, withNumberOfChannels numberOfChannels: UInt32) {

    fft.computeFFTWithBuffer(buffer[0], withBufferSize: bufferSize)

}

func fft(fft: EZAudioFFT!, updatedWithFFTData fftData: UnsafeMutablePointer<Float>, bufferSize: vDSP_Length) {
    var maxF = fft.fftData

    print(maxF)

    var data = fft.fftData
    print(data)

    //here coming my algorithm


}


}

With this code I get strange output on the console:

var data = fft.fftData
print(data)

Output: 0x00000001119be000
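
That output is just the address of the underlying float buffer, since fftData bridges to an UnsafeMutablePointer<Float>. As a rough sketch, assuming the delegate's fftData holds bufferSize linear magnitudes covering 0 to sampleRate/2 (worth verifying against your EZAudio version), the strength of a single frequency such as 2000 Hz could be read like this:

import Accelerate

// Sketch: read the magnitude of the FFT bin closest to a target frequency.
func magnitude(at frequency: Float,
               fftData: UnsafeMutablePointer<Float>,
               bufferSize: vDSP_Length,
               sampleRate: Float) -> Float {
    let binWidth = (sampleRate / 2) / Float(bufferSize)
    let index = min(Int(frequency / binWidth), Int(bufferSize) - 1)
    return fftData[index]
}

// Hypothetical usage inside the fft delegate callback:
// let energyAt2kHz = magnitude(at: 2000, fftData: fftData, bufferSize: bufferSize,
//                              sampleRate: Float(microphone.audioStreamBasicDescription().mSampleRate))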

Many thanks for any help.


Source: (StackOverflow)

EZMicrophone with custom AudioStreamBasicDescription

I want to record audio from the mic, and I need the audio to be in a specific format. Here's the code I'm trying to run:

AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = 2;
asbd.mBytesPerPacket   = 2;
asbd.mChannelsPerFrame = 1;
asbd.mFormatFlags      = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFramesPerPacket  = 1;
asbd.mSampleRate       = 8000;
self.microphone = [EZMicrophone microphoneWithDelegate:self];
[self.microphone setAudioStreamBasicDescription:asbd];

But the app crashes. Here is the screenshot: [crash screenshot]. How do I fix it?


Source: (StackOverflow)

How to show a waveform corresponding to a live audio stream (HTTP Live Streaming) in Swift

I want to show audio waveforms corresponding to streamed audio (HLS). I have successfully implemented streaming, and now I want to add audio waveforms to it. I have searched for this but found nothing. I am using AVPlayer to stream the audio from a URL.

TestVC

import UIKit
import AVFoundation

class TestVC: UIViewController {

    var player = AVPlayer(url: LOW_WEBPROXY! as URL)

    @IBOutlet weak var playImg: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        avPlayerSetup()
    }

    func avPlayerSetup() {
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
            do {
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                print("Active error \(error.localizedDescription)")
            }
        } catch {
            print("Category error \(error.localizedDescription)")
        }
    }

    @IBAction func playBtnTapped(_ sender: UITapGestureRecognizer) {
        if playImg.image == UIImage(named: "play") {
            playImg.image = UIImage(named: "pause")
            player.play()
        } else{
            playImg.image = UIImage(named: "play")
            player.pause()
        }
    }
}

I want to implement an audio waveform in this code. I tried EZAudio (https://github.com/syedhali/EZAudio), but it does not generate a waveform for live streaming.


Source: (StackOverflow)