
AudioKit

AudioKit - Swift audio synthesis, processing, and analysis platform for iOS, macOS, and tvOS. Powerful audio synthesis, processing, and analysis, without the steep learning curve.


AudioKit AKParametricEQ Filter type

What kind of filter does AudioKit's AKParametricEQ use to perform signal equalization?

http://audiokit.io/docs/Classes/AKParametricEQ.html
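
For context, the way I'm using the node is roughly this (a minimal sketch; the parameter values are just placeholders):

import AudioKit

// Sketch only: run the microphone through the parametric EQ.
let mic = AKMicrophone()
let eq = AKParametricEQ(mic)
eq.centerFrequency = 1_000  // Hz
eq.q = 0.7                  // bandwidth / resonance
eq.gain = 6                 // dB of boost or cut at the center frequency

AudioKit.output = eq
AudioKit.start()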

Thank you


Source: (StackOverflow)

migrating TAAE2 to AudioKit 3

I have a large project built with The Amazing Audio Engine 2. I have struggled to get Inter-App-Audio integrated and would like to migrate to AudioKit 3.

Struggled meaning: it integrates, but as soon as I select it as a generator, rendering just stops and the engine ends up in a disabled state.

What are the main differences between the audio systems? TAAE2 uses modules, each with a render block that pushes and pops audio buffers on a render stack.

How does AudioKit render audio? What would be involved, at a high level, in migrating AEModules to AudioKit objects?
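
For comparison, my current understanding of the AudioKit side is that you don't write render blocks yourself; you build a chain of nodes and hand the last one to the engine, roughly like this (a sketch only, untested):

import AudioKit

// Sketch: generator -> effect -> output; no per-module render callback.
let oscillator = AKOscillator()
let reverb = AKReverb(oscillator)   // each node wraps its input

AudioKit.output = reverb            // the last node in the chain feeds the hardware
AudioKit.start()
oscillator.start()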


Source: (StackOverflow)

AKMIDIListener not receiving SysEx

I am using AudioKit's AKMIDIListener protocol on a class to listen for MIDI messages. This is working fine for standard messages such as Note On, but SysEx messages are not coming through.

func receivedMIDINoteOn(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
    NSLog("Note On \(noteNumber), \(velocity), \(channel)") // works perfectly
}
func receivedMIDISystemCommand(_ data: [MIDIByte]) {
    NSLog("SysEx \(data)") // never triggers

    // More code to handle the messages...
}

The SysEx messages are sent from external hardware or test software. I have used MIDI monitoring apps to make sure the messages are being sent correctly, yet in my app they are not triggering receivedMIDISystemCommand.
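
For context, the listener is registered more or less like this (a simplified sketch of my setup; the class name is just a placeholder):

import AudioKit

class MIDIReceiver: AKMIDIListener {
    let midi = AKMIDI()

    init() {
        midi.openInput()        // open the available MIDI inputs
        midi.addListener(self)  // register for AKMIDIListener callbacks
    }
}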

Are there any additional steps required to receive SysEx messages that I'm missing?

Thanks in advance for any clues.


Source: (StackOverflow)

AudioKit: playing two oscillators simultaneously

Hello, I'm working with AudioKit -- it is a stellar framework and I'm very happy learning it so far. I'm working through the HelloWorld example, which has code for a UI button that starts an oscillator at a given frequency...

My question is: if I want to play two oscillator tones at once, such as 432 Hz and a perfect fifth above (ratio of 3:2, so 648 Hz), how can I have them both play simultaneously? Is the correct design pattern to have a new node for each "tone"?

import UIKit
import AudioKit

class ViewController: UIViewController {

    var oscillator = AKOscillator()
    var osc2 = AKOscillator()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Only the first oscillator is routed to the output here;
        // osc2 is never connected or started, which is the part I'm unsure about.
        AudioKit.output = oscillator
        AudioKit.start()
    }

    @IBAction func toggleSound(sender: UIButton) {
        if oscillator.isPlaying {
            oscillator.stop()
            sender.setTitle("Play Sine Wave", forState: .Normal)
        } else {
            oscillator.amplitude = 1 // was: random(0.5, 1)
            oscillator.frequency = 432 // was: random(220, 880)
            osc2.amplitude = 1
            osc2.frequency = 648 // 3:2 above 432 Hz
            oscillator.start()
            sender.setTitle("Stop Sine Wave at \(Int(oscillator.frequency))Hz", forState: .Normal)
        }
        sender.setNeedsDisplay()
    }

}

How can I chain the two oscillators together so they can sing together?
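
The direction I'm guessing at is to sum them into a single output node; a sketch (assuming a mixer node like AKMixer can take several inputs) would be:

// Sketch only: mix both oscillators into one output node.
let mixer = AKMixer(oscillator, osc2)
AudioKit.output = mixer
AudioKit.start()

// ...and then start/stop both together:
oscillator.start()
osc2.start()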


Source: (StackOverflow)

Retrieving Data from AudioKit's FFT Plot

I'm working on a project that involves recording audio from the iPhone's microphone and then feeding it through a Fast Fourier Transform (FFT).

I've found that AudioKit.io has a demo that actively monitors the microphone input and can display a plot of the FFT.

I have equations and logarithms that I plan to analyze the audio data with, so all I really need help with is retrieving the FFT data that is sent to this plot in AudioKit. I'm having a hard time finding the functions/methods that drive the data that fills this plot.

Can anyone point out where to find this FFT data in AudioKit?
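
From what I can tell so far, AKFFTTap might be the relevant piece; this is just a sketch of what I'm imagining (untested, and I'm not sure it's what actually feeds the plot):

import AudioKit

let mic = AKMicrophone()
let fft = AKFFTTap(mic)                     // taps the microphone and keeps an FFT buffer

AudioKit.output = AKBooster(mic, gain: 0)   // zero-gain booster so the engine has an output without audible feedthrough
AudioKit.start()

// Later, read the current FFT magnitudes for my own analysis:
let magnitudes: [Double] = fft.fftData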


Source: (StackOverflow)

iOS AudioKit / EZAudio FFT values

I have a Web Audio API based signal processing application and I have to port it to iOS with the AudioKit (based on EZAudio) framework.

I only need the frequency-domain data, which in the Web Audio API contains numbers between 0 and 255.

But in AudioKit, the AKFFTTap fftData gives me back floats between -6 and 6, and sometimes < 1000.

Here is what I have already tried on ios:

init process ...

let mic = AKMicrophone()
let fftTap = AKFFTTap.init(mic)

request ...

return fftTap.fftData

on Web Audio API: init...

var analyser = audioContext.createAnalyser();

request...

let freqDomain = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqDomain);
return freqDomain

How can I get back the same data?
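
What I'm considering as a workaround (a sketch, assuming the AKFFTTap values are linear magnitudes, and using the Web Audio defaults of minDecibels = -100 and maxDecibels = -30) is to do the byte mapping myself:

import Foundation

// Sketch: map linear FFT magnitudes to 0-255 the way getByteFrequencyData does.
func byteFrequencyData(from fftData: [Double],
                       minDecibels: Double = -100,
                       maxDecibels: Double = -30) -> [UInt8] {
    return fftData.map { magnitude in
        let db = 20 * log10(max(abs(magnitude), 1e-12))                // avoid log10(0)
        let normalized = (db - minDecibels) / (maxDecibels - minDecibels)
        return UInt8(min(max(normalized, 0), 1) * 255)                 // clamp to 0...255
    }
}

The iOS side could then return byteFrequencyData(from: fftTap.fftData) instead of the raw floats.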


Source: (StackOverflow)

How do I control an oscillator's frequency with a sequencer

I'd like to know how to control an AKOscillator's frequency using an AKSequencer; however, the few examples (1, 2) I've seen online only show how to control an AKSampler with the AKSequencer.

Here's a simplified example from AudioKit's GitHub page:

// relevant class properties
var seq: AKSequencer?
var syn1 = AKSampler()

// viewDidLoad
seq = AKSequencer(filename: "seqDemo", engine: AudioKit.engine)
seq?.enableLooping()
seq!.avTracks[1].destinationAudioUnit = syn1.samplerUnit

What I expected:

Based on the above example, I'd expect to be able to do something like this:

var voice1 = Voice() // replaces the AKSampler

seq = AKSequencer(filename: "seqDemo", engine: AudioKit.engine)
seq.loopOn()
//seq.avTracks[0].destinationAudioUnit = voice1.oscillator.avAudioNode
seq.note = voice1.oscillator.frequency // "note" doesn't actually exist in the API

This doesn't do the trick obviously.

Question:

What is the proper set-up that will allow me to control an AKOscillator with the AKSequencer?
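
The closest direction I've found so far is routing the track's MIDI into a callback and setting the oscillator frequency from there. This is only a sketch: I'm assuming AKCallbackInstrument, its callback property, and AKMusicTrack's setMIDIOutput are available in the version I'm on, and I haven't verified the exact signatures.

import Foundation
import AudioKit

// Sketch only; the routing API names below are my assumptions, not verified.
let oscillator = AKOscillator()
let callbackInst = AKCallbackInstrument()
let seq = AKSequencer(filename: "seqDemo")

callbackInst.callback = { status, noteNumber, velocity in
    guard velocity > 0 else { return }                       // skip note-offs
    oscillator.frequency = 440.0 * pow(2.0, (Double(noteNumber) - 69.0) / 12.0)
    oscillator.start()
}

seq.tracks[0].setMIDIOutput(callbackInst.midiIn)             // assumed routing call
AudioKit.output = oscillator
AudioKit.start()
seq.play()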


Source: (StackOverflow)

AudioKit buffer consuming a lot of ram

Opening a file or copying it makes the buffer explode in size.

tape1.write(from: tape2.pcmBuffer)

The same thing happens after saving a file and then opening it again (i.e. after composing):

AKAudioFile(forReading: url, commonFormat: .pcmFormatFloat32, interleaved: true)

That takes a lot of memory as well; a 15 min recording is ~300 MB of RAM, and an iPhone 5s cannot handle it. Is there a way to do this better? How can I reduce the buffer size?


Source: (StackOverflow)

Wavetable parameters for AudioKit

AudioKit is amazing and lets you start some oscillators and vary their frequency on-the-fly. Now I want to change the shape of the waveforms so that I can create custom timbres for my oscillators.

There are five standard waveform types that AudioKit supports:

- sine
- triangle (good sine approximate)
- square wave
- sawtooth
- reverse sawtooth wave

All of them sound different, but it would be really great if I could shape my own waveform using the built-in wavetable support.

http://audiokit.io/docs/Structs/AKTable.html#/s:vV8AudioKit7AKTable6valuesGSaSf_ mentions the AKMorphingOscillator, which is like a miracle class that can change waveforms for the oscillator. The defaults all work, but I am really new to populating the AKTable values.

The GitHub page https://github.com/audiokit/AudioKit/blob/master/AudioKit/Common/Internals/AKTable.swift shows:

/// A table of values accessible as a waveform or lookup mechanism
public struct AKTable {

    // MARK: - Properties

    /// Values stored in the table
    public var values = [Float]()

    /// Number of values stored in the table
    var size = 4096

    /// Type of table
    var type: AKTableType

    // MARK: - Initialization

    /// Initialize and set up the default table 
    ///
    /// - parameter tableType: AKTableType of the new table
    /// - parameter size: Size of the table (multiple of 2)
    ///
    public init(_ tableType: AKTableType = .Sine, size tableSize: Int = 4096) {
        type = tableType
        size = tableSize
        ....

So my question is: can I directly access the values array and simply modify it to make new waveforms? Is there a sensible or idiomatic way to do this?
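
What I'd like to try, if values really is writable from outside, is something like this (a sketch; I'm assuming AKOscillator accepts a custom AKTable via its waveform parameter):

import Foundation
import AudioKit

// Sketch: start from the default (sine) table and overwrite the values with a custom shape.
var table = AKTable(size: 4096)
for i in 0..<table.values.count {
    let phase = Double(i) / Double(table.values.count)
    // Example custom timbre: a sine with a bit of its second harmonic mixed in.
    table.values[i] = Float(0.8 * sin(2 * Double.pi * phase)
                          + 0.2 * sin(4 * Double.pi * phase))
}

let oscillator = AKOscillator(waveform: table)
AudioKit.output = oscillator
AudioKit.start()
oscillator.start()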

Thanks.


Source: (StackOverflow)

CoreAudio crash - AVAudioIONodeImpl.mm:365: _GetHWFormat: required condition is false: hwFormat

I'm working with two application modules:

1) Recording module with this audioSession setup:

try audioSession.setCategory(AVAudioSessionCategoryRecord)
try audioSession.setMode(AVAudioSessionModeMeasurement)
try audioSession.setPreferredIOBufferDuration(0.05)
try self.audioSession.setActive(true)

2) Playback module with this audioSession setup:

try audioSession.setCategory(AVAudioSessionCategoryPlayback) 
try audioSession.setMode(AVAudioSessionModeDefault)
try self.audioSession.setActive(true)

For each transition from 1 -> 2 and 2 -> 1, I first call try self.audioSession.setActive(false).

If I go from module 1) to 2), or repeat 1), everything works fine. But then, going from 2) back to 1), I get this error on try self.audioSession.setActive(true).

This is the error:

ERROR:    [0x16e10b000] >avae> AVAudioIONodeImpl.mm:365: 
_GetHWFormat: required condition is false: hwFormat

What is this error related to? I can't find any help in Apple's iOS documentation to understand where the problem might be.

Does anybody have any tips?


Source: (StackOverflow)

Positional Audio in SceneKit / ARKit with AudioKit

I have been exploring positional audio in Scene Kit / ARKit and would like to see if it’s possible to use AudioKit in this context to benefit from its higher level tools for analysis, sound generation, effects, etc.

For some background, a SCNView comes with an AVAudioEnvironmentNode, an AVAudioEngine, and an audioListener (SCNNode). These properties come initialized, with the environment node already configured to the engine. A positional audio source is added to the scene via a SCNAudioPlayer, which can be initialized with an AVAudioNode - the AVAudioNode must be connected to the environment node and have a mono output.

The SCNAudioPlayer is then added to a SCNNode in the scene and automatically takes care of modifying the output of the AVAudioNode according to its position in the scene as well as the position and orientation of the audioListener node.
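
Concretely, my understanding of that stock SceneKit setup (before bringing AudioKit in at all) is roughly this sketch:

import SceneKit
import AVFoundation

// Sketch of the plain SceneKit positional-audio path, no AudioKit yet.
let scnView = SCNView()
let engine = scnView.audioEngine                 // SceneKit's own AVAudioEngine
let environment = scnView.audioEnvironmentNode   // already attached to that engine

let player = AVAudioPlayerNode()
engine.attach(player)

// Positional sources must reach the environment node as mono.
let mono = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(player, to: environment, format: mono)

// Wrap the AVAudioNode and attach it to a node in the scene;
// SceneKit then spatializes it relative to scnView.audioListener.
let scnPlayer = SCNAudioPlayer(avAudioNode: player)
let sourceNode = SCNNode()
scnView.scene?.rootNode.addChildNode(sourceNode)
sourceNode.addAudioPlayer(scnPlayer)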

My hope is that it will be possible to initialize AudioKit with the AVAudioEngine property of a SCNView, configure the SCNView's environment node within the engine's node graph, use the AVAudioNode property of AKNodes to initialize SCNAudioPlayers, and ensure all sources properly connect to the environment node. I've already begun modifying AudioKit source code, but I'm having trouble figuring out which classes I will need to adapt and how to integrate the environment node into the AudioKit pipeline. In particular, I'm having trouble understanding connectionPoints and the outputNode property.

Does anyone believe this might not be possible given how AudioKit is structured, or have any pointers on the approach?

I will of course be happy to share any of my findings.


Source: (StackOverflow)

Cropping MIDI file using AudioKit

I am trying to crop and loop a certain part of a MIDI file using AudioKit.

I am using a sequencer and have found a couple of things that are close to what I need, but not exactly what I want.

I found a method in AKSequencer called clearRange. With this method I am able to silence the parts of the MIDI I don't want, but I haven't found a way to trim the sequencer and keep only the part I am interested in. Right now only the part I want has sound, but I still get the silent parts.
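
For reference, what I'm doing so far is roughly this (a sketch; the beat values are placeholders, and setLength is my guess at trimming the tail):

import AudioKit

// Sketch of what I have so far: silence everything outside the region I want.
let seq = AKSequencer(filename: "song")
let regionStart = AKDuration(beats: 16)
let regionEnd = AKDuration(beats: 32)

// Silence before the region...
seq.clearRange(start: AKDuration(beats: 0), duration: regionStart)
// ...and after it.
seq.clearRange(start: regionEnd,
               duration: AKDuration(beats: seq.length.beats - regionEnd.beats))

seq.setLength(regionEnd)        // trims the tail, but the silent lead-in remains
seq.enableLooping()
seq.play()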

Is there a way to trim a sequencer or to create a new sequencer with only the portion I want to keep from the original one?

Thanks!


Source: (StackOverflow)

AudioKit FFT conversion to dB?

First time posting, thanks for the great community!

I am using AudioKit and trying to add frequency-weighting filters to the microphone input, so I am trying to understand the values that are coming out of AudioKit's AKFFTTap.

Currently I am just trying to print the FFT buffer converted into dB values:

for i in 0..<self.bufferSize {
    // Treat each FFT bin as a linear amplitude and convert it to decibels
    let db = 20 * log10((self.fft?.fftData[Int(i)])!)
    print(db)
}

I was expecting values in the range of about -128 to 0, but I am getting strange values of nearly -200 dB, and when I blow on the microphone to peg out the readings it only reaches about -60. Am I not approaching this correctly? I was assuming that the values output by the EZAudioFFT engine would be plain amplitude values and that the normal dB conversion math would work. Does anyone have any ideas?

Thanks in advance for any discussion about this issue!


Source: (StackOverflow)

AudioKit - no sound output

I'm trying to use AudioKit (https://github.com/AudioKit/AudioKit) to output a pure sine wave. I've tried setting up a new project like the one on their homepage, and I've also tried their "Hello World" example included in the AudioKit download.

Both build fine, and the Hello World example displays the generated sine wave correctly on the screen, but there is no sound playing; I can't hear anything. I've set the volume to max and checked whether other apps play audio fine, and they do. Are there any suggestions?

I am using Xcode 8.3.3, and my device is an iPad Mini 2 with iOS 10.3.1.

Thank you


Source: (StackOverflow)

AudioKit: Fade in from AKOperationGenerator

Is there a way to fade in a sound from an AKOperationGenerator?

E.g. in the code below, .start() begins at full amplitude with a click.

let whiteNoiseGenerator = AKOperationGenerator { _ in

    let white = AKOperation.whiteNoise()
    return white
}


AudioKit.output = whiteNoiseGenerator
whiteNoiseGenerator.start()
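
One direction I've been considering (a sketch only; I'm assuming AKAmplitudeEnvelope can wrap the generator like any other node) is to put an envelope after it instead of fading inside the operation:

// Sketch: wrap the generator in an amplitude envelope and let the attack do the fade-in.
let envelope = AKAmplitudeEnvelope(whiteNoiseGenerator,
                                   attackDuration: 1.0,
                                   decayDuration: 0.0,
                                   sustainLevel: 1.0,
                                   releaseDuration: 0.5)

AudioKit.output = envelope
AudioKit.start()

whiteNoiseGenerator.start()
envelope.start()   // ramps up over attackDuration instead of clicking in

The idea is to leave the operation itself untouched and do the fade at the node level.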

Source: (StackOverflow)