LittleHolland: Continuous Machine Learning for Electronic Music Composition
Author: Volodymyr Ovcharov (Kyiv Institute of Cybernetics)
Year: 2024
Abstract
The landscape of generative music systems has seen significant advancements with projects like MMM, Music Transformer, MuseNet, Jukebox, and MIDI-DDSP, each contributing unique methodologies and applications to the field. Despite these innovations, several challenges and opportunities remain, particularly in the realm of continuous, automated music composition that leverages deep learning and modern AI architectures. LittleHolland aims to address these gaps and build upon the strengths of existing projects.
Key Motivations:
Enhanced Multi-Track Composition Control:
- MMM demonstrates the importance of maintaining separate time-ordered sequences for each track to allow precise control over individual tracks in multi-track compositions.
- LittleHolland will expand on this by integrating Mamba architecture to handle complex dependencies across multiple tracks, providing even finer control and customization options for composers.
Versatile MIDI and Audio Synchronization:
- MIDI-DDSP shows the potential of synchronizing MIDI with audio for realistic sound synthesis.
- LittleHolland aims to improve this synchronization by using advanced deep learning techniques, ensuring high fidelity and seamless integration between MIDI inputs and synthesized audio outputs.
Iterative Resampling and Customization:
- The iterative resampling feature in MMM allows users to refine specific sections of music iteratively.
- LittleHolland will enhance this by incorporating more sophisticated machine learning models to offer dynamic and adaptive resampling capabilities, giving users greater flexibility and creative control.
Adaptive Note Density and Rhythmic Complexity:
- Projects like Music Transformer and MuseNet have explored adaptive note density and complex rhythmic patterns.
- LittleHolland seeks to provide even more advanced tools for adjusting note density and rhythmic complexity, leveraging the scalability of the Mamba architecture to handle intricate musical variations effectively.
Integration of Textual Prompts and Stylistic Transfer:
- OpenAI’s DALL-E for Music and MuseNet have shown the potential of using textual prompts for generating music in various styles.
- LittleHolland will incorporate similar capabilities, allowing users to input textual descriptions to guide the musical style and mood, thereby enhancing the creative process with intuitive and user-friendly controls.
Continuous Learning and Real-Time Adaptation:
- The dynamic nature of Jukebox, which focuses on raw audio generation, highlights the need for continuous learning and real-time adaptation in music generation.
- LittleHolland aims to implement a continuous learning framework, where the model adapts in real-time based on user feedback and evolving musical trends, ensuring that the generated music remains fresh and relevant.
Architecture and Representation for LittleHolland
LittleHolland is designed to leverage advanced deep learning architectures to achieve continuous and sophisticated electronic music composition. The architecture combines multiple innovative components to ensure precise control, high fidelity, and real-time adaptability.
Core Components
Mamba Architecture
- Overview: At the heart of LittleHolland is the Mamba architecture, a flexible and scalable neural network designed to handle complex dependencies in multi-track music generation. It integrates various neural network layers to capture both short-term and long-term dependencies in music sequences.
- Components:
- Encoder-Decoder Structure: Utilizes an encoder to process input sequences and a decoder to generate output sequences, similar to Transformer architectures but optimized for music data.
- Attention Mechanism: Employs multi-head attention to focus on different parts of the input sequence, allowing for intricate patterns and relationships in music.
- Positional Encoding: Enhances the model's ability to understand the order of notes and beats in the sequence, critical for maintaining musical coherence.
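As a concrete illustration of the positional-encoding component, the standard sinusoidal scheme from the Transformer literature can be computed as follows. This is a minimal sketch of the general technique, not LittleHolland's actual implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sinusoidal positional encoding (Vaswani et al., 2017): position pos gets
// sin/cos values at geometrically spaced frequencies across d_model dims,
// letting the model recover the order of notes and beats in a sequence.
std::vector<std::vector<double>> positional_encoding(int seq_len, int d_model) {
    std::vector<std::vector<double>> pe(seq_len, std::vector<double>(d_model, 0.0));
    for (int pos = 0; pos < seq_len; ++pos) {
        for (int i = 0; i < d_model; i += 2) {
            double angle = pos / std::pow(10000.0, static_cast<double>(i) / d_model);
            pe[pos][i] = std::sin(angle);                    // even dimensions
            if (i + 1 < d_model) pe[pos][i + 1] = std::cos(angle); // odd dimensions
        }
    }
    return pe;
}
```

The encoding is added to each token embedding before the attention layers, so two identical notes at different beats receive distinct representations.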
Multi-Track Representation
- Separate Time-Ordered Sequences: Each track (e.g., drums, bass, melody) is maintained as an independent time-ordered sequence, allowing for precise control over individual tracks.
- Track Embeddings: Each track is embedded into a high-dimensional space, capturing its unique characteristics and enabling seamless integration with other tracks.
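The per-track representation described above can be sketched as plain data structures — illustrative names, not LittleHolland's real types:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>
#include <vector>

// One MIDI-like note event inside a single track's time-ordered sequence.
struct NoteEvent {
    double onset;    // position in beats
    int    pitch;    // MIDI pitch 0-127
    int    velocity; // MIDI velocity 1-127
};

// Each track keeps its own independent, time-ordered sequence of events, so
// one track can be edited or regenerated without disturbing the others.
using Track = std::vector<NoteEvent>;
using MultiTrack = std::map<std::string, Track>;

// Append an event, keeping the per-track sequence sorted by onset time.
void add_event(Track& track, NoteEvent ev) {
    auto it = std::lower_bound(track.begin(), track.end(), ev,
        [](const NoteEvent& a, const NoteEvent& b) { return a.onset < b.onset; });
    track.insert(it, ev);
}
```

Track embeddings would then map each track name to a learned vector; here only the sequence bookkeeping is shown.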
BarFill Representation
- Gap Filling: In scenarios requiring bar-level control, bars to be predicted are removed and placeholder tokens are inserted. The model fills these gaps based on the surrounding musical context, ensuring continuity and coherence.
- Dynamic Bar Management: Handles varying bar lengths and structures, adapting to different musical styles and compositions.
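The gap-filling step above can be illustrated by replacing selected bars with placeholder tokens; the model would later fill each gap from the surrounding context. The token vocabulary here is hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// BarFill-style masking: each bar selected for regeneration is replaced by a
// single placeholder token. Bars are given as token lists (illustrative).
std::vector<std::string> mask_bars(const std::vector<std::vector<std::string>>& bars,
                                   const std::vector<int>& bars_to_fill) {
    std::vector<std::string> out;
    for (size_t i = 0; i < bars.size(); ++i) {
        bool masked = std::find(bars_to_fill.begin(), bars_to_fill.end(),
                                static_cast<int>(i)) != bars_to_fill.end();
        if (masked) {
            out.push_back("<FILL_" + std::to_string(i) + ">"); // gap to be predicted
        } else {
            out.insert(out.end(), bars[i].begin(), bars[i].end()); // kept verbatim
        }
    }
    return out;
}
```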
Advanced Features
Iterative Resampling
- User Interaction: Users can iteratively resample sections of music, refining and modifying specific parts while preserving others. This allows for the creation of complex arrangements and subtle variations.
- Dynamic Adjustment: The model continuously learns from user inputs and adjusts its outputs in real-time, enhancing creativity and personalization.
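Iterative resampling reduces to regenerating one selected span while leaving the rest untouched. In this minimal sketch the model call is stubbed out as a callback; the real system would invoke the generation model:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Resample a single bar: only the selected index is regenerated; all other
// bars are preserved verbatim, so the arrangement around it is stable.
std::vector<std::string> resample_bar(
        std::vector<std::string> bars, size_t index,
        const std::function<std::string(const std::vector<std::string>&, size_t)>& regenerate) {
    if (index < bars.size())
        bars[index] = regenerate(bars, index); // model sees full context
    return bars;
}
```

Repeated calls with different indices give the iterative refine-and-listen loop described above.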
Note Density and Complexity Control
- Adaptive Density Control: Allows users to specify the note density for each track, providing control over the rhythmic and harmonic complexity of the generated music.
- Complexity Parameters: Users can adjust parameters such as polyphony, syncopation, and note duration, tailoring the musical output to their preferences.
Integration with VST Synthesizers
- VST Integration: Supports integration with popular VST synthesizers like Osiris and VirusTi, allowing for high-quality sound synthesis and real-time parameter adjustments.
- Parameter Modulation: AI models can modulate VST parameters in real-time, achieving dynamic sound variations and enhancing the expressiveness of the music.
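Real-time parameter modulation can be sketched as a low-frequency oscillator mapped to the normalized 0..1 range that VST3 parameters use; an AI controller would substitute learned parameter curves for the sine. This is an illustrative sketch, not the actual modulation engine:

```cpp
#include <cassert>
#include <cmath>

// LFO mapped into the normalized 0..1 parameter range: depth 0 pins the
// value at 0.5 (center), depth 1 sweeps the full range.
double lfo_value(double t_seconds, double rate_hz, double depth) {
    const double pi = 3.14159265358979323846;
    double s = std::sin(2.0 * pi * rate_hz * t_seconds); // -1..1
    return 0.5 + 0.5 * depth * s;                        // centered in 0..1
}
```

Evaluating this per block (or per sample, for sample-accurate automation) yields the value written to the synthesizer parameter.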
Continuous Learning and Adaptation
- Real-Time Feedback Loop: Incorporates a continuous learning framework where the model adapts based on real-time user feedback and evolving musical trends. This ensures that the generated music remains fresh and relevant.
- Reinforcement Learning: Utilizes reinforcement learning techniques to optimize the music generation process, rewarding the model for producing desirable musical outcomes.
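The reward bookkeeping behind such a feedback loop can be as simple as a running mean of user ratings per candidate generation setting — a toy multi-armed bandit, shown here only to make the shape of the loop concrete, not a full reinforcement-learning implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Trivial bandit: each "arm" is one candidate generation setting; user
// ratings update a running mean reward, and the best arm is preferred.
struct Bandit {
    std::vector<double> value; // running mean reward per arm
    std::vector<int>    count;
    explicit Bandit(int arms) : value(arms, 0.0), count(arms, 0) {}

    void feedback(int arm, double reward) {
        count[arm] += 1;
        value[arm] += (reward - value[arm]) / count[arm]; // incremental mean
    }

    int best() const {
        int b = 0;
        for (size_t i = 1; i < value.size(); ++i)
            if (value[i] > value[b]) b = static_cast<int>(i);
        return b;
    }
};
```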
Implementation Details
Data Pipeline
- Data Collection and Preprocessing: Collects and preprocesses large datasets of MIDI files and audio recordings, ensuring a diverse and representative training set.
- Feature Extraction: Extracts relevant features from the MIDI and audio data, such as pitch, duration, velocity, and timbre, to train the neural networks effectively.
Model Training
- Training Regimen: Trains the model using a combination of supervised and unsupervised learning techniques, with a focus on minimizing loss functions related to musicality and coherence.
- Validation and Testing: Validates and tests the model on separate datasets to ensure generalization and robustness.
User Interface
- Interactive GUI: Provides an interactive graphical user interface (GUI) for users to input their musical preferences, control parameters, and visualize the generated music.
- Real-Time Editing: Enables real-time editing and playback of the generated music, facilitating an iterative and interactive composition process.
LittleHolland aims to revolutionize electronic music composition by integrating advanced deep learning techniques with user-friendly interfaces and real-time adaptability. By leveraging the Mamba architecture, multi-track and bar-level representations, and continuous learning frameworks, LittleHolland provides musicians with powerful tools to create sophisticated and innovative music.
Key Features of MMM
- Iterative Resampling: Users can iteratively resample sections of music, refining and modifying specific parts while preserving others. This feature is particularly useful for creating subtle variations and complex arrangements.
- Note Density Control: MMM allows users to specify the note density for each track, providing control over the rhythmic and harmonic complexity of the generated music.
- Interactive Demo: An interactive demo showcases MMM's capabilities, allowing users to experiment with various parameters such as track instrumentation and note density.
Applications for LittleHolland
LittleHolland integrates seamlessly with VST3, the latest version of the Virtual Studio Technology (VST) framework developed by Steinberg. This integration allows LittleHolland to provide powerful tools for music producers to create, modify, and enhance music compositions by leveraging advanced AI capabilities. Here, we describe the VST3 framework and provide a simple example of how to create a VST3 plugin that transfers MIDI and audio data to the LittleHolland server/database.
VST3 Framework from Steinberg
VST3 is a powerful and flexible audio plugin interface standard that provides enhanced features and capabilities compared to its predecessors. It enables developers to create plugins that can process audio and MIDI data with high precision and efficiency. Key features of VST3 include:
- Sample-Accurate Automation: Allows precise control over plugin parameters.
- Improved Event Handling: Efficient processing of MIDI and audio events.
- Resizable GUIs: Enables dynamic resizing of plugin interfaces.
- Audio Inputs for VST Instruments: Supports side-chaining and audio routing.
- Multiple MIDI Ports: Handles multiple MIDI input and output ports.
Creating a VST3 Plugin for LittleHolland
Below is a simple example of how to create a VST3 plugin that transfers MIDI and audio data to the LittleHolland server/database. This example uses the VST3 SDK and demonstrates the basic setup for a plugin that can capture MIDI and audio data and send it to a remote server.
Prerequisites
- VST3 SDK: Download the VST3 SDK from Steinberg's website.
- Development Environment: Set up a C++ development environment with CMake support.
Example Code
Project Structure
Project Structure

```
LittleHollandVST/
├── CMakeLists.txt
├── src/
│   ├── LittleHollandProcessor.cpp
│   ├── LittleHollandProcessor.h
│   ├── LittleHollandController.cpp
│   ├── LittleHollandController.h
│   └── LittleHollandFactory.cpp
└── resources/
    ├── vstentry.cpp
    ├── version.h
    └── resource.h
```

CMakeLists.txt
```cmake
cmake_minimum_required(VERSION 3.10)
project(LittleHollandVST)

# VST3_SDK_ROOT must point at the unpacked VST3 SDK. Adding it as a
# subdirectory also makes the SDK's smtg_* helper macros available.
add_subdirectory(${VST3_SDK_ROOT} vst3sdk)

set(target littleholland_vst)
smtg_add_vst3plugin(${target}
    src/LittleHollandProcessor.cpp
    src/LittleHollandProcessor.h
    src/LittleHollandController.cpp
    src/LittleHollandController.h
    src/LittleHollandFactory.cpp
    resources/vstentry.cpp
    resources/version.h
    resources/resource.h
)

# The processor uses libcurl for HTTP uploads.
target_link_libraries(${target} PRIVATE curl)
```

LittleHollandProcessor.h
```cpp
#pragma once

#include "public.sdk/source/vst/vstaudioeffect.h"

#include <curl/curl.h>
#include <string>

namespace LittleHolland {

class LittleHollandProcessor : public Steinberg::Vst::AudioEffect {
public:
    LittleHollandProcessor();
    ~LittleHollandProcessor();

    static Steinberg::FUnknown* createInstance(void*) {
        return (Steinberg::Vst::IAudioProcessor*)new LittleHollandProcessor();
    }

    //---from AudioEffect---
    Steinberg::tresult PLUGIN_API initialize(Steinberg::FUnknown* context) SMTG_OVERRIDE;
    Steinberg::tresult PLUGIN_API process(Steinberg::Vst::ProcessData& data) SMTG_OVERRIDE;
    Steinberg::tresult PLUGIN_API setupProcessing(Steinberg::Vst::ProcessSetup& setup) SMTG_OVERRIDE;
    Steinberg::tresult PLUGIN_API setState(Steinberg::IBStream* state) SMTG_OVERRIDE;
    Steinberg::tresult PLUGIN_API getState(Steinberg::IBStream* state) SMTG_OVERRIDE;

protected:
    void sendDataToServer(const std::string& data);

private:
    CURL* curl;
};

} // namespace LittleHolland
```

LittleHollandProcessor.cpp
```cpp
#include "LittleHollandProcessor.h"

#include <iostream>
#include <sstream>

using namespace Steinberg;
using namespace Steinberg::Vst;
using namespace LittleHolland;

LittleHollandProcessor::LittleHollandProcessor() : curl(curl_easy_init()) {
    // MyControllerUID is assumed to be declared in a shared header
    // (omitted in this excerpt).
    setControllerClass(MyControllerUID);
}

LittleHollandProcessor::~LittleHollandProcessor() {
    if (curl) {
        curl_easy_cleanup(curl);
    }
}

tresult PLUGIN_API LittleHollandProcessor::initialize(FUnknown* context) {
    tresult result = AudioEffect::initialize(context);
    if (result != kResultOk) {
        return result;
    }
    addAudioInput(STR16("Stereo In"), SpeakerArr::kStereo);
    addAudioOutput(STR16("Stereo Out"), SpeakerArr::kStereo);
    addEventInput(STR16("Event In"), 16); // 16 MIDI channels
    return kResultOk;
}

tresult PLUGIN_API LittleHollandProcessor::setupProcessing(ProcessSetup& setup) {
    return AudioEffect::setupProcessing(setup);
}

tresult PLUGIN_API LittleHollandProcessor::setState(IBStream* state) {
    return kResultOk;
}

tresult PLUGIN_API LittleHollandProcessor::getState(IBStream* state) {
    return kResultOk;
}

tresult PLUGIN_API LittleHollandProcessor::process(ProcessData& data) {
    if (data.inputParameterChanges) {
        int32 numParamsChanged = data.inputParameterChanges->getParameterCount();
        for (int32 index = 0; index < numParamsChanged; index++) {
            IParamValueQueue* paramQueue = data.inputParameterChanges->getParameterData(index);
            if (paramQueue) {
                switch (paramQueue->getParameterId()) {
                    // No parameters defined yet.
                    default:
                        break;
                }
            }
        }
    }
    if (data.inputEvents) {
        int32 numEvents = data.inputEvents->getEventCount();
        for (int32 index = 0; index < numEvents; index++) {
            Event event;
            if (data.inputEvents->getEvent(index, event) == kResultOk) {
                if (event.type == Event::kNoteOnEvent) {
                    std::stringstream ss;
                    ss << "Note: On, Pitch: " << event.noteOn.pitch
                       << ", Velocity: " << event.noteOn.velocity;
                    sendDataToServer(ss.str());
                } else if (event.type == Event::kNoteOffEvent) {
                    // Note-off data lives in the noteOff union member.
                    std::stringstream ss;
                    ss << "Note: Off, Pitch: " << event.noteOff.pitch
                       << ", Velocity: " << event.noteOff.velocity;
                    sendDataToServer(ss.str());
                }
            }
        }
    }
    return kResultOk;
}

void LittleHollandProcessor::sendDataToServer(const std::string& data) {
    // NOTE: a blocking HTTP request on the real-time audio thread will cause
    // dropouts; in production, queue the data and send it from a worker
    // thread. Shown inline here only for brevity.
    if (!curl) return;
    curl_easy_setopt(curl, CURLOPT_URL, "http://littleholland.server/api/upload");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data.c_str());
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK) {
        std::cerr << "CURL error: " << curl_easy_strerror(res) << std::endl;
    }
}
```

LittleHollandController.h
```cpp
#pragma once

#include "public.sdk/source/vst/vsteditcontroller.h"

namespace LittleHolland {

class LittleHollandController : public Steinberg::Vst::EditController {
public:
    LittleHollandController() {}
    ~LittleHollandController() override {}

    static Steinberg::FUnknown* createInstance(void*) {
        return (Steinberg::Vst::IEditController*)new LittleHollandController();
    }

    Steinberg::tresult PLUGIN_API initialize(Steinberg::FUnknown* context) SMTG_OVERRIDE;
};

} // namespace LittleHolland
```

LittleHollandController.cpp
```cpp
#include "LittleHollandController.h"

using namespace Steinberg;
using namespace Steinberg::Vst;
using namespace LittleHolland;

tresult PLUGIN_API LittleHollandController::initialize(FUnknown* context) {
    return EditController::initialize(context);
}
```

LittleHollandFactory.cpp
```cpp
#include "public.sdk/source/main/pluginfactory.h"

#include "LittleHollandProcessor.h"
#include "LittleHollandController.h"

using namespace Steinberg;
using namespace Steinberg::Vst;
using namespace LittleHolland;

namespace LittleHolland {
// Placeholder GUIDs - generate unique ones for a real plug-in. In a real
// project these would live in a shared header so the processor can also
// reference MyControllerUID.
static const FUID MyProcessorUID (0x12345678, 0x9ABCDEF0, 0x12345678, 0x9ABCDEF0);
static const FUID MyControllerUID(0x0FEDCBA9, 0x87654321, 0x0FEDCBA9, 0x87654321);
} // namespace LittleHolland

BEGIN_FACTORY_DEF("LittleHolland",
                  "http://littleholland.com",
                  "mailto:info@littleholland.com")

    //---Processor component (kVstAudioEffectClass)---
    DEF_CLASS2(INLINE_UID_FROM_FUID(MyProcessorUID),
               PClassInfo::kManyInstances,   // cardinality
               kVstAudioEffectClass,         // component category (do not change)
               "LittleHolland Processor",    // plug-in name
               Vst::kDistributable,          // component is distributable (in a bundle)
               Vst::PlugType::kInstrument,   // subcategory
               "1.0.0",                      // plug-in version
               kVstVersionString,            // VST 3 SDK version
               LittleHollandProcessor::createInstance)

    //---Edit controller (kVstComponentControllerClass)---
    DEF_CLASS2(INLINE_UID_FROM_FUID(MyControllerUID),
               PClassInfo::kManyInstances,
               kVstComponentControllerClass, // controller category (do not change)
               "LittleHolland Controller",
               0,                            // controllers are not distributable
               "",
               "1.0.0",
               kVstVersionString,
               LittleHollandController::createInstance)

END_FACTORY
```

resources/vstentry.cpp
```cpp
#include "public.sdk/source/main/pluginfactory.h"

//------------------------------------------------------------------------
// Module init/exit
//------------------------------------------------------------------------
bool InitModule()   { return true; }
bool DeinitModule() { return true; }

// The class factory itself (BEGIN_FACTORY_DEF ... END_FACTORY) is defined
// once, in src/LittleHollandFactory.cpp; defining it in more than one
// translation unit would fail to link.
```
Explanation
This VST3 plugin for LittleHolland captures MIDI and audio data from a DAW and sends it to a remote server using HTTP POST requests. The processor class handles the audio and MIDI processing, while the controller class manages the plugin's user interface. The plugin uses libcurl for HTTP requests to communicate with the LittleHolland server.
Setting Up the Development Environment
- Download and Install VST3 SDK: Obtain the VST3 SDK from Steinberg's website and set it up in your development environment.
- Configure CMake: Ensure CMake is installed and properly configured to work with the VST3 SDK.
- Build the Plugin: Use CMake to generate project files for your development environment and build the plugin.
By integrating with the VST3 framework, LittleHolland can capture and process MIDI and audio data from various DAWs, enabling sophisticated music composition and real-time adjustments. This example provides a foundational approach to creating a VST3 plugin for LittleHolland, demonstrating how to send MIDI and audio data to a remote server for further processing.
Comparison with Similar Products and Research
Here is a comparison of MMM (Multi-Track Music Machine) with five similar systems in the field of generative music.
| Feature | MMM (Multi-Track Music Machine) | Music Transformer | MuseNet | Jukebox | MIDI-DDSP | OpenAI DALL-E for Music |
|---|---|---|---|---|---|---|
| Developed by | Jeff Ens, Philippe Pasquier | Google Brain | OpenAI | OpenAI | Google Brain | OpenAI |
| Architecture | Transformer | Transformer | GPT-like Transformer | VQ-VAE + Transformers | CNN + DDSP | Transformer |
| Focus | Multi-track music generation | MIDI music generation | Multi-instrumental, stylistic music generation | Raw audio generation | MIDI-to-audio synthesis | Text to music generation |
| Control Level | Track-level and bar-level | Note-level | Instrument and style-level | Track-level | Note and audio-level | Concept and style-level |
| Data Representation | Multi-Track and BarFill | MIDI | MIDI | Raw audio | MIDI and Audio | Textual prompts |
| Key Features | Iterative resampling, note density control | Relative positional encoding, attention mechanism | Multi-instrument support, stylistic transfer | Raw audio generation, high fidelity | Synchronization of MIDI and audio | Generates music from textual descriptions |
| Training Dataset | Lakh MIDI Dataset | Piano-e-Competition Dataset | Multiple MIDI datasets | Custom audio dataset | Various MIDI datasets | Various music and text datasets |
| Applications | Music composition, experimental development | Music composition, performance | Music composition, style transfer | Music composition, performance | Audio synthesis, music production | Music composition, creative tools |
| Interactive Demo | Yes | Yes | Yes | Yes | No | No |
| Publication Year | 2020 | 2019 | 2019 | 2020 | 2020 | 2021 |
| Source Link | MMM | Music Transformer | MuseNet | Jukebox | MIDI-DDSP | OpenAI DALL-E for Music |
Conclusion
MMM represents a significant advancement in generative music systems, offering enhanced control and flexibility for multi-track compositions. By leveraging the power of the Transformer architecture, MMM addresses the limitations of previous models and provides a robust framework for music generation and manipulation.
Here is an overview of several existing AI music projects that involve generating sounds using VST synthesizers like Osiris/VirusTi and applying AI to achieve varied parameters for creating appealing beats.
AI Music Projects with VST Synthesizers
Orb Producer Suite 3
- Description: A set of AI-powered MIDI generator plugins including Orb Chords, Orb Melody, Orb Bass, and Orb Arpeggio. The suite includes a full wavetable synthesizer, enabling users to generate complex musical patterns with advanced customization options.
- Features: Allows quick randomization of patterns and advanced customization of parameters like complexity, density, and polyphony. Synchronizes across the entire DAW project to ensure harmony.
- Application: Useful for music producers to quickly generate and manipulate MIDI patterns, integrating seamlessly with other VST plugins.
- Source: Production Music Live
Playbeat
- Description: An AI drum sequencer that automatically creates drum patterns based on specified parameters or existing phrases.
- Features: Offers both quick idea generation and in-depth editing of parameters such as steps and density. Includes three types of randomization algorithms for infinite variations.
- Application: Ideal for producers looking to create dynamic and varied drum patterns with ease.
- Source: Production Music Live
Magenta Studio
- Description: A set of five AI tools from Google available as Ableton Live plugins and standalone apps. Includes tools like Continue, Generate 4 Bars, Drumify, Interpolate, and Groove.
- Features: Allows for transformation of existing melodies and drum patterns, creating new musical ideas by merging rhythmic or melodic concepts.
- Application: Great for transforming and enhancing existing MIDI patterns in creative ways.
- Source: Magenta TensorFlow
Synplant
- Description: Uses AI to create synth patches from audio recordings, generating synthesized variations from dropped samples.
- Features: Provides various ways to sculpt sounds, including a unique DNA editor for further customization.
- Application: Suitable for sound designers and producers looking to create unique synth patches from audio samples.
- Source: Sonic Charge
Emergent Drums 2
- Description: An AI-powered plugin that generates original drum samples from scratch using generative models.
- Features: Utilizes Deep Sampling technology to create endless variations of personal samples. Functions as a 16-pad MIDI-playable instrument with multi-out support.
- Application: Perfect for producers needing unique and royalty-free drum sounds.
- Source: Native Instruments
Comparison Table
| Feature | Orb Producer Suite 3 | Playbeat | Magenta Studio | Synplant | Emergent Drums 2 |
|---|---|---|---|---|---|
| Developed by | Hexachords | Audiomodern | Google | Sonic Charge | Audialab |
| Focus | MIDI generation | Drum pattern generation | MIDI transformation | Synth patch generation | Drum sample generation |
| Control Level | Chords, melody, bass, arpeggio | Steps, density | Bars, melodies, rhythms | Audio samples to synths | Drum sounds |
| Integration | VST, DAW synchronization | VST, DAW integration | Ableton Live, standalone | VST | VST, MIDI-playable |
| Customization | Complexity, density, polyphony | Steps, density, randomization | Continuation, drumification, interpolation | DNA editor | Deep sampling |
| Application | Music composition, beat making | Drum programming | MIDI pattern transformation | Sound design | Drum sound creation |
| Source | Production Music Live | Production Music Live | Magenta TensorFlow | Sonic Charge | Native Instruments |
Conclusion
These AI tools and plugins offer various innovative features for music production, ranging from MIDI generation and transformation to drum sample creation and synth patch generation. They provide musicians and producers with powerful capabilities to explore new creative possibilities and enhance their music production workflows.
Additional References
- AudioCipher
- We Rave You
- Native Instruments Blog
- Make Use Of
- MusicRadar
- Audiomodern
- Algonaut Atlas
- Sonic Charge
- iZotope
- Evabeat
- KVR Audio
- LANDR Blog
- Loopmasters
- Sonic State
- Plugin Boutique
- MusicTech
- Synthtopia
- Bedroom Producers Blog
- Gear News
- Reverb
AudioCipher
- Description: AudioCipher is a text-to-MIDI DAW plugin that converts words into musical ideas. It supports integration with various VST synthesizers, allowing users to create melodies and harmonies based on text input.
- Application: Ideal for composers looking for creative inspiration by turning textual concepts into MIDI sequences.
- Source: AudioCipher
VPS Avenger 2 Generative AI Expansion Pack
- Description: An expansion pack for VPS Avenger 2 that introduces AI-generated melodies and patterns. The pack uses AI to create evolving presets across various genres, allowing users to control parameters through Macro controls.
- Application: Suitable for music producers who want to explore AI-driven melodies and patterns within a powerful synthesizer.
- Source: Plugin Boutique
Dreamtonics Synthesizer V Studio Pro
- Description: An AI-powered singing synthesis software that creates realistic vocal performances. Users can generate vocal tracks by sketching melodies and adding lyrics, with fine control over pitch, timing, and expression.
- Application: Perfect for producing realistic and expressive vocal tracks for various music genres.
- Source: Native Instruments Blog
Audialab Emergent Drums 2
- Description: An AI-powered plugin that generates original drum samples from scratch using generative models. It also functions as a MIDI-playable instrument with multi-out support.
- Application: Ideal for creating unique drum sounds and patterns for electronic music production.
- Source: Native Instruments Blog
Magenta Studio
- Description: A suite of AI tools developed by Google that transforms MIDI patterns and creates new musical ideas. The tools include Continue, Generate 4 Bars, Drumify, Interpolate, and Groove.
- Application: Useful for transforming and enhancing existing MIDI patterns and generating new compositions.
- Source: Magenta TensorFlow
- Ens, Jeff, and Philippe Pasquier. "MMM: Exploring Conditional Multi-Track Music Generation with the Transformer." arXiv preprint arXiv:2008.06048 (2020).
- Jeffreyjohnens. "MMM: Multi-Track Music Machine." jeffreyjohnens.github.io.
- Metacreation. "MMM: Multi-Track Music Machine." metacreation.net.
- Huang, Cheng-Zhi Anna, et al. "Music Transformer: Generating Music with Long-Term Structure." Magenta TensorFlow.
- Payne, Christine. "MuseNet." OpenAI, openai.com/research/musenet.
- Dhariwal, Prafulla, et al. "Jukebox: A Generative Model for Music." OpenAI, openai.com/research/jukebox.
- Engel, Jesse, et al. "DDSP: Differentiable Digital Signal Processing." arXiv preprint arXiv:2001.04643 (2020).
- Ramesh, Aditya, et al. "Zero-Shot Text-to-Image Generation." OpenAI, openai.com/research/dall-e.
- Roberts, Adam, et al. "MusicVAE: Generating Music with Fine-Grained Control." Magenta TensorFlow.
- Hawthorne, Curtis, et al. "Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset." arXiv preprint arXiv:1810.12247.
- Vaswani, Ashish, et al. "Attention is All You Need." arXiv preprint arXiv:1706.03762.
- Raffel, Colin, et al. "Learning-Based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching." arXiv preprint arXiv:1512.04946.
- Oore, Sageev, et al. "This Time with Feeling: Learning Expressive Musical Performance." arXiv preprint arXiv:1808.03715 (2018).
- Hsiao, Wen-Yi, et al. "Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs." arXiv preprint arXiv:2107.05931.
- Dong, Hao-Wen, et al. "MusPy: A Toolkit for Symbolic Music Generation." arXiv preprint arXiv:2008.07139.
Existing AI Music Projects Utilizing VST Synthesizers
1. AudioCipher
2. VPS Avenger 2 Generative AI Expansion Pack
3. Dreamtonics Synthesizer V Studio Pro
4. Audialab Emergent Drums 2
5. Magenta Studio
Comparison Table of Additional AI Music Projects
| Feature | AudioCipher | VPS Avenger 2 Generative AI | Dreamtonics Synthesizer V Studio Pro | Audialab Emergent Drums 2 | Magenta Studio |
|---|---|---|---|---|---|
| Developed by | AudioCipher Technologies | Manuel Schleis, Mirko Ruta, Andy Hinz | Dreamtonics | Audialab | Google |
| Focus | Text-to-MIDI generation | AI-generated melodies and patterns | AI-powered vocal synthesis | AI-generated drum samples | MIDI pattern transformation |
| Control Level | Textual input to MIDI | Macro controls for patterns | Pitch, timing, expression | Deep sampling of drum sounds | MIDI bars, melodies, rhythms |
| Integration | VST, DAW | VST, DAW integration | VST, DAW integration | VST, MIDI-playable | Ableton Live, standalone |
| Customization | Word-based musical ideas | Evolving presets across genres | Fine control over vocal parameters | Endless variations of samples | Continuation, drumification, interpolation |
| Application | Music composition | Music production | Vocal track production | Drum sound creation | Music composition |
| Source | AudioCipher | Plugin Boutique | Native Instruments Blog | Native Instruments Blog | Magenta TensorFlow |