So come up to the lab / and see what's on the slab / I see you shiver with antici...
  • ...pation

    From Olivier’s Facebook page, we know that he is working on at least three new modules. Exciting stuff, but it is his clues about “module 3” that make me wonder if Dr Gillet is turning into Dr Frank N. Furter:

    • Started schematics/layout for new module 3. The board is so dense there are parts sneaked in under pots, and there’s a little mezzanine board above the processor!
    • Finished assembly of module 3 prototype. Yet another JTAG mistake O_O. Wrote firmware covering all I/O. Board bring-up status: 100%. Analog section tested and validated. Will start writing code next week…
    • Wrote view/controller code for new module 3.
    • Wrote more DSP code for new module 3.
    • Recorded tons of training data for new module 3. I love leaving machine learning/system identification algorithms crunching data and figuring out all the good stuff by themselves
    • Finished building a model used in a feature of module 3. It is still stuck a bit in the uncanny valley, but I have plenty of time before the production to improve things.
    • Wrote a significant chunk of new module 3’s firmware.
    • Implemented new module 3’s firmware updater.
    • Ordered new iteration of PCB prototypes for new module 3.
    • Added objectionable secret functions to new module 3.

    So, Olivier has been busy analysing data – presumably musical or audio data – using machine learning algorithms to build mathematical models of something. He previously used machine learning to build the Kohonen SOMs used in Grids, and prior to that, for feature extraction in his PhD thesis.

    But it is the mention of the uncanny valley that has me intrigued. Could it be that module 3 will be the first modular, voltage-controlled version of Vocaloid? Are the “objectionable secret functions” an imitation of coprolalic Tourette’s Syndrome?
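
    Incidentally, for anyone who hasn't met a Kohonen SOM before, here is a toy sketch in Python (NumPy only; the data and map size are invented for illustration, nothing to do with how Grids was actually trained):

```python
import numpy as np

# A toy Kohonen self-organising map (SOM): a small grid of units learns
# to cover a 2-D data cloud. Purely illustrative -- the data, map size
# and schedules here are invented, not anything from Grids.
rng = np.random.default_rng(42)
data = rng.random((500, 2))                # 500 random 2-D "feature vectors"

grid = 4                                   # a 4x4 map
weights = rng.random((grid * grid, 2))
coords = np.array([(i % grid, i // grid) for i in range(grid * grid)])

def quantisation_error(w):
    # mean distance from each sample to its best-matching unit (BMU)
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

err_before = quantisation_error(weights)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.3   # shrinking neighbourhood radius
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighbourhood on the map grid, centred on the winner
        g = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
        weights += lr * g[:, None] * (x - weights)

err_after = quantisation_error(weights)
print(err_after < err_before)              # training should tighten the map
```

    The neighbourhood function is what makes nearby map units end up representing similar data, which is exactly why a SOM can be "navigated" smoothly with a pair of CVs.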

  • I hope the long-awaited Mutable Instruments sequencer will be coming up…
    The mention of the board being really dense makes me think it has a large analog section… Could that be a sequencer? Or maybe some kind of digitally controlled, very complex analog VCO?

  • Either he trained it on lotsa gangsta rap, or it was a combination of Frank Zappa, Blowfly and DJ Assault…

    It sounds both intriguing and fun, yet somehow a bit ominous. As always, it will be interesting to see what develops.

  • You’re looking at this from the wrong angle…

    I could have very well thrown computational resources at figuring out the quirks of the Moog CP-3 mixer…

  • @BennelongBicyclist: One glaring error with this thread: the title should end with “antici” and the first word of your post should be “pation”.

    If it is some sort of vocal synthesizer, a style of analog formant filter would be a smart addition. Usually a serial bandpass, bandpass, lowpass setup. That takes a lot of analogue components.
    Also, if you do make a vocal synth, add a USB keyboard input, please.

  • Man, now I want a voltage-controlled version of Vocaloid… :(

  • @audiohoarder: I had Riff Raff fix it.

  • Ahh, so you modeled old Neve/SSL/Fairchild units then? Just being hopeful. Whatever it will be, Polymoog or PS-3100 resonators, Sennheiser Vocoder or something I’m sure it will turn out great.

  • Does this look granular to you? From this week’s instalment on Olivier’s Facebook page, in which he further discusses the new modules, particularly module 3. At first I thought it was a wave-folded sawtooth wave, but I think it is more likely to be an amplitude envelope, in which case, it seems pretty grainy to me.

    Mungo g0, watch out! (As much as I want to promote Australian technology start-ups like Mungo Enterprises, there’s really no contest with the inventiveness, design and engineering elegance, and open-source internationalism of the Mutable Instruments approach. But gosh, this thing does look awesome, or ambitious, at least.)

  • Here are Olivier’s latest Facebook clues on the new modules:

    • Finished the DSP code for the first half of new module 2. This is a totally self-contained module with two major digital building blocks. One of them done!
    • Spent 10 days with the mrs, away from code, oscilloscopes and circuit boards.
    • Started sorting, annotating, and collecting data for the second DSP half of new module 2.

    My speculation regarding this, based on Olivier’s spectrograms shown above, was:

    Hmmm, data mining spectrograms, and that’s only for half of module 2… could it be that you are creating a Kohonen self-organising map (SOM) of spectrograms based on features of each that you are annotating, and then navigating this map, as you did in Grids? But for what purpose? Maybe to drive a granular synthesis engine in some clever way that I can’t imagine, yet?

    Does anyone else (who is not operating under a formal or informal non-disclosure agreement with Olivier) have any ideas of what he is up to? I’m just curious, and I love a puzzle, and Olivier clearly loves dropping clues. Which reminds me, did anyone crack the Peaks Easter egg, with the Edgar Allan Poe allusions and the Esperanto nine?

  • Hum… no idea. You know these are spectrograms? They seem very abruptly sliced. If it’s something melodic one might expect a bit more order and maybe a dominant frequency.

    Maybe something like Frames, only that it processes audio? So that one can jump between different filter/EQ sections, a bit like vocoder snapshots, either on a signal that can be put in, or on some internal sound source (noise, whatever). If that makes any sense – probably not, I do not code any such stuff and it’s probably better that I leave that to others…

  • @morcego Yes, that’s why I referred to them above as…spectrograms.

  • That spectrogram looks like the harmonic wavetable view in AudioTerm. If this is a custom wavetable/additive oscillator from MI, count me in!

    However, the additive slices/grains all look like Shepard tones to me. This could get interesting if it were used as a CV source. Endless cutoff sweeps, haha. For all we know, Olivier could have a S&H triggering the different modes.

  • @audiohoarder Ah, you’re interpreting the spectrogram(s) as output from the new module. I had assumed they were slices for input to a data mining or clustering process, but you’re probably right. Maybe this new module is a granular synthesis engine in which you can navigate around an n-dimensional volume of grain types, clustered or ordered by their harmonic and envelope similarity? A bit like the timbre and colour parameters for Braids, but far more generalised. That would require only a limited amount of memory to store the grains in the map, which would be consistent with the use of a Cortex M4 processor without external memory. Who knows, it might even be polyphonic – an M4 probably has enough horsepower to permit that.

  • There’s two thumbwheels bottom right. Are they a clue?

  • No, it’s part of the Sonic Visualiser interface.

  • Now that is a very cool tool!

  • @BennelongBicyclist: I thought it would be output because it looks like changing a mode of operation. If this is input data, I don’t see the average or even advanced Eurorack user being able to use it very well. Of course, this could be an under-the-hood calibration process the user isn’t meant to touch. That would make sense.

    What I am noticing on a second look is that there are harmonic bands that do not move while others sweep around them. This looks like some type of “128-band formant filter”, which I am familiar with from the K5000. There is a neat vowel filter in the Alesis Air FX which is cool, but can put you in the uncanny valley. A more basic use of this type of filter is to simulate string and woodwind sections.
    Anyway, if the input into the module were white noise – and it looks like white noise to me – this spectrogram would show the presets for the formant filter. Neat.
    I wouldn’t expect a Eurorack formant filter to be 100% editable on the faceplate because most users would get frustrated very fast.

  • @audiohoarder – no, I meant input into a data mining/clustering/data reduction process, to create a small set of data which is used to drive the module, as opposed to live, real-time input. Olivier did just that with percussive rhythms for Grids, and I suspect he may be doing the same for this module, using, um, not sure – maybe various grain features? Sonic Visualiser can use a lot of very nifty Vamp plug-ins for feature extraction and annotation, and I wonder if Olivier has just concatenated a lot of samples together into a file (for convenience), and is using Sonic Visualiser to extract features from each sample in that file? Most advances are incremental or evolutionary, rather than revolutionary, and that would seem to be a logical step forward from both Braids and Grids. Of course, I am probably quite wrong.

  • Looking at my K5000s and playing it, there are some clever tricks that simplify live playing. However, the formant filter itself – while clever – is greatly helped by the FF bias, an LFO and a great deal of synergy with the harmonic grouping of the ADD additive sources. The harmonic grouping has a few dedicated controls, an LFO and harmonic group envelopes. Using these, it’s not always a question of manipulating every single frequency band of the formant filter.

    So, if the new module has some additive stuff I would expect that a little carry-over from the K5000 is there. Then again, it might be a granular gizmo.

    There’s plenty of old gold to dig up from olden digital synths: the simplified modelling done in the WSA-1 where samples are affected by a resonator, the formant sequence stuff from the FS1R, harmonic grouping and formant filtering from the K5000… Then we have the stuff already covered: wavetables, CZ-stylee phase mod, old-school Ensoniq.

    I’m still waiting for the MI Vector/Wave sequence module too :)

  • @BennelongBicyclist: I am probably just reading my own experience into the graph as well. The biggest guess I made is that this would be the output signal.
    If this is a compressed frequency data set, I have no idea what it is for. Euclidean algorithms are a bit easier to wrap my head around. I’m sure it will be very useful either way. :)

    @Jojjelito: Yeah, the FF Bias is awesome for live tweaking. If anyone ever makes a perfect clone of the K5000 filter section, I will be very happy. It is my favorite filter after all. I’d even settle for just the low pass and high pass with crazy resonance.
    Isn’t the MI vector sequencer Frames? I’m sure it can be used like one.

  • > I wonder if Olivier has just concatenated a lot of samples together into a file (for convenience), and is using Sonic Visualiser to extract features from each sample in that file?

    For feature extraction, there’s nothing that can be done with SV that cannot be done more conveniently from a command line.

    Concatenating stuff together for feature analysis is a strange idea, because it adds discontinuity effects that introduce outliers for every analysis window that contains a transition. Plus, you can’t parallelize processing of a single big file!
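
    A quick numerical illustration of that discontinuity effect (the signals here are invented):

```python
import numpy as np

# An analysis window that straddles the splice between two concatenated
# samples sees a jump, which smears energy across the whole spectrum and
# shows up as an outlier in any spectral feature. Toy signals only.
n = 1024
t = np.arange(n)
a = np.sin(2 * np.pi * 5 * t / n)              # low-frequency tone A
b = np.sin(2 * np.pi * 9 * t / n + 2.0)        # tone B, unrelated phase
spliced = np.concatenate([a, b])               # naive concatenation

win = np.hanning(n)

def high_freq_energy(frame):
    # spectral energy above bin 50 -- near zero for a clean low tone
    spec = np.abs(np.fft.rfft(frame * win))
    return float((spec[50:] ** 2).sum())

clean = high_freq_energy(spliced[:n])                    # window inside A
straddle = high_freq_energy(spliced[n // 2:n // 2 + n])  # window across splice
print(straddle > 100 * clean)                  # the splice frame is an outlier
```

    The straddling frame shows spurious broadband energy that neither source sample actually contains, which is exactly the outlier problem described above.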

    You all have interesting theories :) But let’s say that if you have seen it in another synth, or if there’s a term for it well-known among musicians, then it’s probably not something I’d like doing!

  • @audiohoarder: Yeah, Frames is awesome but I was thinking some kind of decently polyphonic vector synth. Preferably with the vector sequence stuff closely connected to the oscillators. Something like a WS with VS character and usable filters. Not some kind of non-resonant ersatz filter à la WS or VFX. Those synths are nice too, but there’s room for improvement.

    Guess we hafta wait and see. I wanna hear some weird sound snippets from whatever is coming, but I’m sure they will appear in due course.

  • @pichenettes yes, quite right – I keep telling my colleagues: “Never do by hand anything that you can tell the computer to do, repeatably, with a bit of code” and “Always eat with a fork and spawn.”

    OK, so it is an output spectrogram. I am still sticking to my theory of a CV-navigable n-dimensional volume of clustered or SOMed grains. A multi-dimensional granular Braids.

  • @Jojjelito: VS character is just a lot of HZ aliasing. In both directions! It has nothing to do with the bit depth. You can get close approximations on the Prophet 12 with the oscillator character section.
    The Kawai K1 and K4 are pretty great at doing vector-style stuff thanks to the flexible envelopes. However, the most fleshed-out vector sequencer is definitely the TG33. Of course, they sound very clean and more like samplers.
    I do understand your frustrations, though. I would love a modern sampler with resonant analogue filters and 4 oscillators. So a Prophet 12 with sampling, haha. So many older synthesizers got very close, but they just missed some obvious features.
    I am also interested in sound demos for the new modules. 4 new modules can cover a wide range of applications. I think a full MI system will need to be at least 6U at this point.

  • @audiohoarder: There’s a remedy out there: just use some nifty ITB solution like Kontakt or the Korg software WS. If a P12 focused too much on being a hardware sampler like those of yore, it would be a lead balloon. I gave up on ever seeing a poly vector synth realized as hardware. Or… guess a P12 with a joystick, a small wave ROM and wave sequencing… Maybe the Solaris, then.

    I betcher there’s a purinsesu Kenny out there intent on sabotaging the design of my dream synth. That has to be it, dear old Occam says so :p

  • @Jojjelito: There is a big difference between a sampling synthesizer and a rompler, but I more than understand DSI wanting to keep their instruments focused. I still don’t understand why the Tempest can’t import samples from the SD card, and I know I am not alone in that regard.

    Also, the WS VST has sampling? I just know the legacy collection VSTs are free with a Korg registration now. I never thought of it as sampling. Also, Kontakt/Komplete is a bit too bulky and expensive to keep upgrading, in my opinion. I don’t personally use it in my setup, but I am familiar with it. Not to mention the licensing for selling Kontakt patches.

  • @audiohoarder: Nope, no sampling in the legacy collection, but at least it’s a WS with the filter flaw corrected, plus a decent ensemble mode using the other bits too, and software renditions of other classics thrown in. On the other hand, a WS EX has a nice keyboard controller…

    Well, instead of getting too stuck in my gear fantasies I just make do with what’s here. It keeps me busy for a while.

    Anyways, I’m curious about the new MI modules. The collection will use four subracks before we know it.

  • @Jojjelito: Oh, I already knew that the legacy collection has resonant filters on all of the synthesizers. I never used them too much. The digital ones are pretty much right on the money for the sound!

    Now to re-rail the convo…
    No matter how complex we can imagine the potential modules, they will be simple to use. I am definitely glad that MI is making very popular digital modules in a format awash in all too similar analogue modules. It is taking away the stigma of “Digital? You should just use a computer!” from the Eurorack community. Of course, I will have to wait for the audio demos to be certain. ;)

  • Olivier has posted more clues about the mysterious new module 2, including a plot of frequency versus frequency deviation in cents for Padé approximants and what I presume are truncated Taylor series polynomials, presumably relating to this comment: “Experimented with approximations in various areas of new module 2’s code. They are actually not a bad thing at all and add a little flavor of analog detuning/instrument stretch-tuning.”

    I don’t know much about DSP, but are those used as interpolation filters when changing the sample rate in band-limited direct synthesis? Maybe they have other uses. (I am learning that DSP turns out to be closely related to time-series analysis in classical statistics, which is not very surprising, really, given that sound is a time series of air pressures…).
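
    To get a feel for why those approximants matter, here is a little sketch comparing a truncated Taylor series with a Padé approximant of exp(x), with the error expressed in cents. The orders chosen (Taylor order 4, Padé [2/2]) are my guess at what the plot might show, not Olivier’s actual code:

```python
import numpy as np

# Comparing a truncated Taylor series against a Pade approximant for
# exp(x), the core of any volt/octave-to-frequency conversion, with the
# error expressed in cents. The orders here (Taylor 4, Pade [2/2]) are
# guesses for illustration.
x = np.linspace(-1.0, 1.0, 201)              # roughly +/- 1.44 octaves

exact = np.exp(x)
taylor4 = 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24           # Taylor, order 4
pade22 = (1 + x / 2 + x**2 / 12) / (1 - x / 2 + x**2 / 12)  # Pade [2/2]

def cents(approx):
    # pitch error of the approximation, in cents (1200 cents per octave)
    return 1200 * np.log2(approx / exact)

print(f"Taylor order 4: max {np.max(np.abs(cents(taylor4))):.2f} cents off")
print(f"Pade [2/2]:     max {np.max(np.abs(cents(pade22))):.2f} cents off")
```

    The Padé version is markedly closer for the same number of coefficients, and an error of a couple of cents reads as gentle stretch-tuning rather than a bug, which fits Olivier’s comment nicely.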

    Other clues provided by Olivier:

    • Wrote all the glue / meta-parameters code that wraps together the two halves of new module 2 in an interface ready to be grafted onto the hardware – the main class that needs to be fed a stream of trigger input, codec and ADC values!
    • Did most of the overall tuning/sound balancing of new module 2. Very happy with the results, but I want it badly to work with at least 2-3 voices of polyphony so optimizations will be necessary.

    Polyphony! Clearly the ADC values are read from pots and/or CVs, the trigger is presumably like the trigger for the physical models in Braids, and the reference to a codec suggests that input audio is being encoded – but is that as a Kohonen map or similar n-dimensional ordering of pre-sampled sounds or grains, or in real time? Hmmm, could this be a real-time sampler? A real-time granulator? Do any such modules already exist? (I can think of the Mungo g0, but there may be others.) If there is already something like that, then that’s not what Olivier is building…

    Another clue: “Experimented with a nice stereo/spatial/verb output mode for new module 2.”

    Further guesses or speculation, anyone?

  • “Codec” simply refers to an IC converting audio from analog to digital and back. These are not known as ADCs or DACs because they include both functions, their architecture is optimized for the audio band (and, most of the time, AC-coupled), and pretty much all of them are 1-bit designs with some clever digital processing.
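
    To see what that 1-bit trick looks like, here is a toy first-order delta-sigma modulator; real codec ICs are higher-order with much fancier decimation filters, so this is just the principle in miniature:

```python
import numpy as np

# Toy first-order delta-sigma modulator: the output is only ever +/-1,
# yet its local average tracks the input waveform. Parameters are
# arbitrary; real codec chips are far more sophisticated.
osr = 64                                     # oversampling ratio
n = 256 * osr
x = 0.5 * np.sin(2 * np.pi * np.arange(n) / n)   # one slow sine cycle

bits = np.empty(n)
err = 0.0                                    # quantisation error, fed back
for i in range(n):
    v = x[i] + err
    bits[i] = 1.0 if v >= 0 else -1.0        # the 1-bit decision
    err = v - bits[i]                        # first-order noise shaping

# Crude decimation filter: a running mean recovers the audio-band signal
recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")
worst = np.max(np.abs((recovered - x)[osr:-osr]))
print(f"worst-case tracking error: {worst:.3f}")   # small, despite +/-1 output
```

    The error feedback pushes the quantisation noise up to high frequencies, where the decimation filter removes it – which is why a “1-bit” converter can deliver audio-band performance.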

  • More clues from Olivier, this time a blurred photo of modules 3 and 4:

    Yeah, I tried a blind deconvolution to undo the Gaussian blur, but no success (didn’t really expect it to work)...

    However, it is clear that both of these modules are stereo, and that the buttons in the centre of the smaller module on the right choose the parameters shown in this earlier photo:

    One of the knobs on each side of that module is labelled “SLOPE”.

    The two coloured knobs and inputs on the larger module on the left are for “CELL” and “BOKEH”, and the white knobs are for “PITCH” and “MIX”.

    Oh, and one of these new modules is named “Bridges”. One might be named “Grains”.

  • “BOKEH”? OK, googling actually returns a very meaningful result ;)

    However, jacks with a dark square are typically outputs on MI modules, and the larger module seems to have just one. You think the double row of knobs indicates stereo, but why aren’t both jacks in the same style?
    But I would also guess it is an oscillator or sound generator of some sort.

    The smaller module seems to have a more straightforward and symmetrical design. The LEDs look like an input level meter, so audio processing. This one really looks like stereo, are you sure the other is also?

  • Yep, the one with the BOKEH parameter has to be some type of reverb. I would not be surprised if the output were a stereo headphone output that could double as a line audio out. Of course a regular mono jack would work too.
    What would be really cool is the ability to do “live” convolution reverbs. Kind of like ring mod, but with reverb algorithms.
    I can’t wait to see what they are really about either way!

  • bokeh always good. There’s an electronic artist called mind bokeh who I really tried to like just cos of his name – but couldn’t. He did snaffle a pretty cool handle tho

  • Or maybe it was an album

  • Bokeh is good, but it can also be overdone or used as a crutch.

  • Which Dutch prog rock group of the ’70s, most famous for their yodelling, never released an album called “Bokeh”?

  • “I would not be surprised if the output were a stereo headphone output that could double as a line audio out.”

    —> a module with (only) a stereo headphone output? Sounds weird. For a stereo module I would at least expect both signals as conventional mono outs. So stereo would only be for those who use it as a last module (preferably with headphones), and those who want to stay within the modular get a mono out? I don’t think so.

    And why does it have to be stereo? The two rows of knobs could be related parameters, like offset and attenuation (I am not saying it is offset/attenuation, just an example that there are parameters that require two knobs). The smaller module for sure looks like a stereo module with audio input.

    let’s wait and see …

  • The small one seems to have at least compression abilities… If I stare at the 4 symbols between the LEDs I can see (from top down):
    1) attack/release
    2) vacuum tube – soft knee or soft overdrive?
    3) decay
    4) compression ratio
    so the meter LEDs are also used to choose the parameter for tweaking.

  • I think it’s a sort of reverb, and that tends to be the last stage in a chain. Or it could be used as a sort of modulated space effect. The single line/stereo out and mono/euro out would be a compromise due to how cramped the panel already is.

    The one thing I am sure of – the big one is Grains and the smaller one is Bridges. Bridges seems to be the saturation module. It has an A/D envelope and a compressor. I think the middle two markings are for transistor/tube distortion and resonance?

    I am fine with being way off the mark, so I have no problem wildly guessing features.

  • We know the big one (Grains, aka Module 3) has an input DSP stage that passes parameters to an output or processing DSP stage. So maybe not stereo, but the two columns of knobs are for pairs of input/analysis and output processing parameters? Agree with @Picard and @audiohoarder re the smaller module (Bridges, module 4 or 2).

  • Just had an idea: maybe it is called Bridges because it is the module used to connect your modular to your other gear? A compressor, so as to not clip external line inputs, makes sense to me.

  • Or it might not be called Bridges. I only suggested that because Olivier posted a line drawing of boats in a harbour with a bridge in the background, in the style of the Forth Bridge in Scotland. Maybe it was a drawing of the Forth Bridge, and it was just a punning reference to the 4th new module?

    Or maybe the reference is to the boats in the harbour, in which case the module is called Vessels and it implements physical excitation modelling of drums and suchlike, or reverb in various sizes and types of vessels.

  • I’d propose Docks, since it is the connection to the outside world — in the out direction it adds some tube flavour and compression, and in the in direction it provides an envelope follower.

  • Here is Olivier’s latest doodle/clue for module 1:

    At first I thought they might be constellation diagrams for some form of phase shift keying? Or some form of quadrature? Or are those polar diagrams? Yup, it must be some form of spatialiser! And now I understand the pictorial reference to the Firth of Forth Rail Bridge (or a bridge like it). I’m not aware of any Eurorack module that does psycho-acoustic spatial manipulation, so that’s my best guess, given that the capability of the new module must be unique, or at least substantially novel. Hmmm, threefold symmetry… so it is a hexaphonic surround-sound spatialiser, using clever phase modulation and psychoacoustics to render that with just a stereo pair of channels?
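
    If it really is a spatialiser, the simplest conceivable building block is a constant-power pan law; this is just the textbook version, and whether module 1 does anything like it is pure guesswork:

```python
import numpy as np

# Constant-power panning: equal-power sine/cosine gains keep perceived
# loudness steady as a source moves across the stereo field. Textbook
# law only; purely speculative as to the module's actual internals.
def constant_power_pan(mono, position):
    """position in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right."""
    theta = (position + 1) * np.pi / 4       # map position to [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono

signal = np.ones(4)                          # dummy mono signal
for pos in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(signal, pos)
    # total power left^2 + right^2 stays constant at every position
    print(pos, round(float(left[0] ** 2 + right[0] ** 2), 6))
```

    A fancier spatialiser would add inter-channel phase and delay manipulation on top of amplitude panning, which is where the psychoacoustics would come in.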

  • In case you are wondering what is written on the back of the piece of paper on which Olivier did his little doodle, it says:

    POS (position)
    PGM (program)

  • I can’t wait!!!

  • First thing coming to mind when looking at the spectrogram was a granular sort of delay mixed with a granular shifting/phasing reverb. The hexagon and the way its vertices variously connect could represent the programs for how the grains spread relative to each other in time, pitch and space.

  • hmm
    the vertical stripes are actually more like white noise. I’d be more inclined to think that it could be some sort of percussive synthesis module than anything else.

  • OK, the latest clues from Olivier, posted to the MI FB page:

    • Continued writing and experimenting with new module 1 DSP code.
    • Worked with Hannes on panel revisions for new module 1 and new module 2.

    Accompanying these was a picture of Pollie Fallory, instantly recognisable to anyone who is a fan of Peter Greenaway films – she is subject number 74 from his first film, The Falls.

    Here is a transcript of what the narrator says about Pollie:

    Before the Violent Unknown Event, Pollie Fallory did indifferent bird imitations. She had impersonated a nightingale for twenty-seven nights in a play called ‘The Little Green Finches’, and she played the part of a budgerigar with clipped wings in a film called ‘The Reluctant Singer’.

    Her act had been accompanied by random fluttering gestures and the habit of singing through an almost closed mouth. When she employed an agent, he would always be telling her to open her mouth and freeze her arms.

    Translator: “After the Violent Unknown Event, Pollie Fallory spoke Mickel-ease or Mickel. It was a language full of alliteration, sudden turns of speech, high registers, changes in volume and unexpected silences in which the speaker took prolonged and exaggerated breaths. Waiting for the next syllable in Mickel-ease was like waiting for a child to scream after a fall. Pollie quickly assumed a command of Mickel-ease that stretched the human tongue and voice-box to influence the language of animals rather than the other way around.”

    In a belated response to the badgering of her former agents, Pollie’s body now stood rigid when she spoke or sang and remained that way, ideally unaccompanied by the slightest facial or body gesture.

    Except for an occasional patient smile, she indicated with her body as little as possible that might reflect on her speech. She was persuaded to re-learn English to reach and recruit a larger ornithological audience, to add the VUE anthem to her repertoire and to make a definitive version of the Bird List Song. Pollie became a raconteur. She also did woman imitations.

    So what are we to conclude from this? Perhaps nothing, apart from Olivier’s admiration for Greenaway’s film and/or Nyman’s music (both of which I share). Or perhaps the new Module 2 gives input signals the characteristics of Mickel-ease, as described?

  • Ok, so module two is some kind of vocal thing. We know it isn’t a vocoder, but it could be a resonator that simulates vocal cords. Not just human ones, but animals like birds as well.

  • “...alliteration, sudden turns of speech, high registers, changes in volume and unexpected silences in which the speaker took prolonged and exaggerated breaths” sounds more like some form of granular processor to me. Possibly called “Twitchers”, or “Twitches”. Or “VUEs”, or “Views”.

  • Perhaps the whole series of new modules is themed after Peter Greenaway films: Drowning by Numerically Controlled Oscillators? Impedance and Two Noughts? The Draughtsman’s Programming-by-Contract? The Baby of Makenoise? The Belly of a Sound Architect? The Tulse Looper Suitcases (104HP, 6U)? Prospero’s Hooks?

  • The Society for Ornithological Extermination, eh? Quite a fallaver palaver! Could the new module be called Tweets? No, that’s taken. Maybe Chirps? Pollie Fallory ~ polyphallory?? Lizards, including Flying Lizards, are polyphallic. Too tenuous? Polyphony? I’ll need to let my subconscious work on this…

  • Well, that was a treasure trove of clues.
    The language “Abcadefghan” is actually just spoken Estonian. The Estonian language has unique phonemic lengths.
    Looking back at the graph posted, the breaks in the spectrogram may well be the module switching phonemes. I thought that the breaks looked Euclidean, as if Olivier used a Grids to sequence the changes, but it is a bit clearer that the rhythmic changes are native to the new module 2. Of course, it looks like there are more than just the 3 phonemic lengths available in Estonian.
    Is this some kind of experiment to generate a new language?

    Supposedly early results from this module landed in the uncanny valley, so a vocal simulator – even a new language that sounds like gibberish – seems to fit that bill quite nicely.
    Voltage in, mathematically perfect gibberish out.

    I bet no one has posted that video before.

  • @audiohoarder So you think this new module can sing (with 1V/oct control) in 92 new languages? With phonemes selected by navigating around a Kohonen self-organising map, as Grids does for rhythms? And/or with a hidden Markov model or some other generative process learnt from analysis of real singing or speech, so that the ordering of phonemes sounds realistic? So, a sort of Eurorack Elizabeth Fraser (from early Cocteau Twins), or a Jónsi from Sigur Rós? Certainly that would be entirely novel, as Olivier promised. Actually, that’s by far the most satisfying explanation for the clues dropped so far.

  • @audiohoarder
    i have . . . several times ;-)

  • @BennelongBicyclist: Maybe not 92 languages, but the parameters could change the timbre drastically. If there is one thing Olivier can do, it is make efficient algorithms for dedicated tasks, so he could be using any number of methods or even a novel one.
    I will say that a built-in, automatic phonemic sequencer is only necessary for the construction of “words”. This could take the Vocaloid approach of having the note hold on to the last phoneme until another gate is received. I wouldn’t be surprised if both triggering methods were implemented.

    Anyway, if it still sounds “unnatural” some swing should be used on the phonemes so they aren’t constantly the same divisible lengths, and the pitch should have an exponential glide between notes. That should help humanize things.
    I am waiting on this module to replace the overused chorus in films and games. At least in horror genres. I really want to hear what this one sounds like.
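
    That exponential glide, by the way, is just a one-pole lowpass applied to the pitch CV; a tiny sketch, with the coefficient and note values chosen arbitrarily:

```python
import numpy as np

# Exponential glide (portamento) as a one-pole lowpass on the pitch CV:
# each sample, the pitch moves a fixed fraction of the remaining distance
# to the target. Coefficient and note numbers are arbitrary examples.
def glide(targets, coeff=0.01):
    out = np.empty(len(targets))
    pitch = targets[0]
    for i, target in enumerate(targets):
        pitch += coeff * (target - pitch)    # exponential approach to target
        out[i] = pitch
    return out

# A step from MIDI note 60 to 72: the glide never overshoots, closing
# ~63% of the remaining gap every 1/coeff samples (the time constant).
steps = np.concatenate([np.full(100, 60.0), np.full(500, 72.0)])
glided = glide(steps)
print(round(float(glided[-1]), 2))           # -> 71.92, close to but not at 72
```

    The asymptotic approach is what makes the glide sound “humanized” – the pitch lands softly on the note instead of snapping to it.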

    @fcd72: And here I thought I was being so original. ;)

  • OK, so the money is on module 2 being called Songs, offering polyphonic phonemes, more than just vowels, possibly arranged in a navigable Grids-like map, or possibly auto-sequenced by a generative process (probably the former, such that different patterns of X and Y CV inputs generate a sequence of phonemes – but clustering similar phonemes near each other doesn’t make much sense, unless the similarity metric is a Markov transition probability, such that a small change in CV plus a trigger emits a phoneme with a high probability of occurring next, and a big jump in CV emits a low-probability phoneme – it could still be a stochastic process, and you could play a stochastic sequence of phonemes with a keyboard or CV sequencer… huh, none of that sounds likely, but it could be a fun experiment). Not sure how the polyphony is implemented, unless it is just a chorus-like effect with all the voices in dephased and/or detuned unison, and/or singing harmonies. Probably completely wrong…
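
    To make the “CV jump size maps to transition probability” idea concrete, here is a toy sketch with an invented phoneme set and a random transition matrix (everything here is made up for illustration):

```python
import numpy as np

# Toy Markov chain over an invented phoneme set: a small CV picks a
# likely next phoneme, a large CV jump picks an unlikely one. The
# phonemes and transition matrix are fabricated for this sketch.
rng = np.random.default_rng(7)
phonemes = ["a", "i", "u", "ka", "te", "mo"]
n = len(phonemes)

T = rng.random((n, n))
T /= T.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

def next_phoneme(current, cv):
    """cv in [0, 1): 0 picks the most probable successor, ~1 the least."""
    order = np.argsort(-T[current])        # successors, most to least likely
    return int(order[min(int(cv * n), n - 1)])

state = 0
for cv in (0.0, 0.0, 0.95, 0.1):           # a mostly-likely walk, one big jump
    state = next_phoneme(state, cv)
    print(phonemes[state])
```

    With a scheme like this, a slowly drifting CV would produce plausible-sounding phoneme sequences, while a sudden CV jump would inject a surprising one – which is exactly the playable-stochastic-process idea above.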

  • If I were in the business of making 1500€ modules with the goal of selling 20 of them per year, yeah, I would do such things.

    Currently, the tooling/production setup costs are such that I really need to sell 250 pieces of a module to make it worthwhile, so I am afraid the new modules are more mundane things that people might actually need in their creative process.

  • @audiohoarder It seems like we have followed Olivier’s clues up the garden path…

  • this thread. fucking hell.

  • I still think it will be possible to get a vocal-like timbre out of this new module 2.

    Maybe it has an SD card slot for eBooks and it will read them to you in Morse code with an eerie vocal quality? That is all it does, I bet. ;)

    @pichenettes. You could save a few bucks by replacing the diodes with wires. ;) I hope you realize I’m joking.

  • What does the FOX say? ;-)

  • @moofi – is that a rhetorical question, and if so, am I going to have to re-watch all three Tulse Luper films to find the answer? First I’ll have to find my Tulse Luper DVDs… Actually I’d really like to watch them again.

  • @BennelongBicyclist: He might be referring to that Norwegian earworm from last fall…
    I’m almost afraid to post a link!

    Naah, just in case:


    I must post this to balance this out… else the universe might collapse.

  • Oh dear! We’ve gone from dissecting the high cinematic art of one of the greatest film-makers of the late 20th Century to those two artefacts, both of which are equally execrable.

    But the thing that really bothers me is that this week, THERE IS NO GODDAMN CLUE!

  • @fcd72: Nice that they covered a Massive Attack classic at least! Good choice too.

  • Actually, Massive Attack themselves covered it; the original is by William DeVaughn.

    The funny thing is BennelongBicyclist’s comment about Rumer, which I hear often. Like with tofu, it looks totally boring at first sight. Personally, I think she is one of the most talented singers; it takes real excellence to be that relaxed and laid back…

  • @fcd72 No, you’re correct, they are not equally execrable. Only one of them would qualify as a Eurovision entry. However, neither belongs on the soundtrack to a Peter Greenaway film. Which leads me to a request: @pichenettes, could the next new module red herring be a reference to a Werner Herzog film of your choice, please? Except not the film he made here in Oz. That was also execrable.

  • Haha, forgot that Massive too covered it. Their take on Light my Fire is – innarestin.

    There seems to be lots of talk about scatological things in this thread; I just hope the new module doesn’t output the Brown Note when I play it while I’m conked out after work one fine day.

  • @Jojjelito, you’re confusing execrable with excrement – they are completely different words. If you describe something as excrement, it means it is shit, whereas if you describe something as execrable, it means it is shit.

  • @BennelongBicyclist: Exactly. I didn’t confuse them. I was just amused by how this thread took a turn into a comparative study of art, or the lack thereof.

  • I am reminded that some take a literal approach to the laboratory development of new synth modules – this project has today announced interim results – scroll to end of this MW thread

  • @BennelongBicyclist

    Actually I was referring to Olivier’s “FOXes” comment, in relation to that birdy imitation and to vocal animal sounds in general: it could be a device named FOX (an alteration of VOX), with FOX also pointing at the question Ylvis posed, as if this module could deliver an audible answer ;-)

  • More clues from Olivier:

    • Continued development of new module 1’s firmware. 3 out of the 4 DSP algorithms available on the module are finally giving results I’m satisfied with; I’ll tweak the last one or replace it with something else.
    • Wrote a DSP algorithm which will be used to extract from samples an interesting bit of data available in new module 2.

    And logos:

  • Streams?

  • Schwurbels.

  • I initially thought “Swells” for the top one, but that doesn’t fit very well in the line-up given Peaks and Tides. More likely those are overlapping granular envelopes, but what is the noun? It could be Samples, but that seems too direct and literal to be characteristic of Olivier. How about “Seeds”?

    There is almost certainly a reverb/delay module amongst the 4 new ones, so my guess for that is “Spaces”, but “Streams” seems to fit the pictogram better, unless those are reflection off walls (I doubt it).

    Then I’d rather go with “Schwurbels”. “Swells” does not sound like a good name; it might as well be called “Pimples”, and who’d want that? Or it is called “Swindles” and there is no new module, just a blind panel?

    but “Spaces” and “Streams” sound quite reasonable… at least a possibility.

  • “Swindles” should be the name of the eventual MI blank panels with MI matching logo.

  • Olivier has ruled out a general-purpose, field-programmable module, thus the names probably aren’t “Sounds” or “Signals”. In fact, there is quite a long list of plural nouns starting with “S” that are probably not names of the new modules. Sorry, I don’t have time to list them all here.

  • And the second module name also seems to begin with an S. ... That’s probably a compressor/distortion module, right? Stomps?

  • @shiftr – ah, yes, the pictogram depicts dynamic range being squashed, and extra harmonics being introduced?

  • I think you are right about the pictogram.

  • Can’t wait to discover all these :-)

  • Did someone already say Slopes for the first one? Looks like slopes to me.

  • The latest clues from Olivier, and they are a lot less obscure this time:

    • Implemented new features in module tester for new module 4 test procedure.
    • Analyzed a bunch of samples for extracting interesting model parameters for new module 2.
    • Wrote bootloader/firmware updater for new module 2.
    • Wrote ADC multiplexing and other I/O routines for new module 2.

    But there’s more! A soundbite!

    So that is confirmation that one of the new modules is a reverb, and it might even be called Rooms. Could those rooms have variable numbers of axes of symmetry, as well as variable size and other characteristics, I wonder?

    And there’s even more – some numpy (numeric Python) code – how rusty is your linear algebra?

    OK, so he’s sampling a power-of-two number of bytes along the first dimension of what is presumably an audio sample array, then transforming that into a Hankel matrix and doing a singular value decomposition on it to characterise it (um, I think – I haven’t written out exactly what the code does). For what purpose, I have no idea, but basically he is data-mining audio data to derive features which could then be used to identify, categorise, cluster, or possibly even partially reconstruct some characteristics of those samples. Um, I think (my maths is rusty, and my knowledge of data mining of signals is non-existent).
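    For what it’s worth, here is a rough guess at the shape of that analysis – not Olivier’s actual code; the test signal, frame size, and everything else are made up. The point is that the singular values of a Hankel matrix built from a frame act as a compact feature: a clean sinusoid contributes two large singular values, noise contributes many small ones.

```python
import numpy as np

# Hypothetical audio frame: one sinusoid plus a little noise.
rng = np.random.default_rng(0)
n = 256                                   # power-of-two frame length
t = np.arange(n)
frame = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(n)

# Hankel matrix: H[i, j] = frame[i + j], so every anti-diagonal is
# constant. A sliding-window view builds exactly that structure.
H = np.lib.stride_tricks.sliding_window_view(frame, n // 2 + 1)

# SVD without the singular vectors: the decay of the singular value
# spectrum summarises how many strong (quasi-)sinusoidal modes the
# frame contains - a classic step in subspace methods like HSVD/ESPRIT.
s = np.linalg.svd(H, compute_uv=False)
print("top singular values:", np.round(s[:4], 1))
```

    For this test signal the first two singular values dwarf the rest, which is the kind of "interesting bit of data" one could extract from samples and feed into a model.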

  • Hmmm… I’ve probably got all of these modules mixed up at this point, but the one with the bokeh parameter being a reverb rings a bell. The ability to mix two reverb algorithms into one?

    In the demo it does seem as if the reverb parameters are changing, but are frozen for each impulse. That would explain why polyphony was necessary. The big metallic hit at 16 seconds has a large hall-style reverb on it, but the other hits have nowhere near as long a reverb tail.
    The sudden transition from long reverb to short muted reverb at 13 seconds is also of interest.

  • Of course, that sound clip may feature two modules – a reverb module, called Rooms, and a physical modelling synthesis module, called Things.

  • You guys know that you are the secret product development squad, don’t you?

  • I’m quite sure some of the ideas for modules come from derailed discussions on this forum.

  • Yeah, that’s the new R&D process at Mutable Instruments: bang on random cans and plates in a warehouse, record and post on soundcloud, and then let people come up with module ideas from that.

  • I’m surprised most people on this forum haven’t yet figured out that Olivier Gillet has retired to the Bahamas and that Mutable Instruments is now fully being run by an AI that uses Markov chains on the forum to generate designs which get sent to the CM automatically.

  • Stranger things have happened – for instance the Plan B saga… just saying :)

  • @fcd72: Really? Well one of the modules is definitely a K5000 filter clone. Let’s call it Shrieks. Now that we have figured out one there are 3 to go! ;)
    On a side note, I don’t think that I’ve ever heard of a digital eurorack filter. I don’t see why no one makes one…

    With all of these clues and speculations I have definitely lost track of what I think each module does.
    I will say that making a dedicated reverb module that only does reverb seems risky because reverb is really a love it/hate it kind of deal. So I am sure there will be more to the module. I have a great BOSS multi effect rack, but I hate the reverb in it, and it is very flexible. Well, only the one algorithm with early reflection and EQ controls, but more flexible than several software reverbs. Good thing I can turn it off…
