February 2019 Summary


  • Got quotes from Windstream and Nextlink on a better internet connection – decided on switching to Nextlink as soon as we can afford the transition costs
  • Learned how to use Ardour and developed a workflow for mixing
  • Created Audacity, Ardour, and Kdenlive stubs for all of S1E01
  • Made contact with Raziq/Kiriyo Studios for a possible Mocap-for-Blender training deal – planning to visit in March
  • Finished the LA-Launch animatic
  • Wrote a “Copy NLA” script and started an “AX” add-on for Blender (skeleton for future utility scripts).
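The “AX” add-on mentioned in the last bullet is just a skeleton so far. As a rough illustration only (the operator and class names here are my own guesses, not the actual project code), a minimal Blender add-on skeleton for utility operators like “Copy NLA” might look like this:

```python
# Minimal sketch of a Blender utility add-on skeleton.
# All names here are illustrative, not the real "AX" add-on code.

bl_info = {
    "name": "AX Utilities (sketch)",
    "blender": (2, 79, 0),   # assumed target version for early 2019
    "category": "Animation",
}

try:
    import bpy

    class AX_OT_copy_nla(bpy.types.Operator):
        """Copy NLA tracks from the active object to selected objects."""
        bl_idname = "ax.copy_nla"
        bl_label = "Copy NLA"

        def execute(self, context):
            src = context.active_object
            for obj in context.selected_objects:
                if obj is src:
                    continue
                # ... actual NLA-copying logic would go here ...
            return {'FINISHED'}

    def register():
        bpy.utils.register_class(AX_OT_copy_nla)

    def unregister():
        bpy.utils.unregister_class(AX_OT_copy_nla)

except ImportError:
    bpy = None  # not running inside Blender (e.g. a quick syntax check)
```

Blender discovers the add-on through `bl_info` and calls `register()` when it is enabled; each future utility script becomes another `Operator` class registered the same way.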

Feb 20, 2019 at 4:01 PM

Screencast: Appending Character Animation

Keneisha Perry shows the process of adding animations from existing action libraries to a shot file, using the Non-Linear Animation (NLA) editor.

This is from the “Touring Baikonur” sequence in the pilot episode, “Prolog”.

We’ve been hoping to make extensive use of the NLA to re-use animation.

Keneisha also recorded this video about fixing broken proxies:

Feb 18, 2019 at 4:01 PM

Re-Assembling Sequence “LA-Launch”

I’m continuing to work through assembling a new GL-rendered animatic from shot files. In principle, I’m just copying my earlier animatic, but with shots rendered from the actual animation files as I set them up for finishing character and mechanical animation. I’m also animating the mechanical and camera direction as I go.

In practice, as new shots get pulled in, I see opportunities to improve the cut, so I’m also making some editing changes. I think this sequence reads better now.

It’s also a bit shorter, because I moved the end of it (including the actual launch, ironically) into the next sequence, in order to merge the files a bit more logically.

Feb 12, 2019 at 4:01 PM

Worklog: Assembling Animatic Sequences

This log video (for 2019-02-11) shows me rendering audio stems from Audacity, assembling the Ardour mix files, exporting the audio, and assembling the Kdenlive edit files for sequences in the pilot episode (“Prolog”). Of course, the video here is mostly GL animatic renders. There are still numerous problems to solve, obviously, but things are starting to come together.

Feb 11, 2019 at 4:00 PM

Audacity “Stems”

I’ve been reorganizing the soundtrack assembly for our episode to match the plans I drew up earlier, and to integrate Ardour into our workflow.

Essentially, each sequence gets its own Audacity project file, which will be the “original” for the audio in the “source edition”. These files contain all the original sound clips, stored in a way that allows them to be accessed individually, as well as some sounds created inside the editor.

These are then output to “stem” files of approximately the same length as the sequence. (In fact, I’m making sure they all start at the beginning of the sequence by inserting a “silence” clip right at the beginning that is included in each stem — this makes sure they will be correctly aligned if I line up their start points.)
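To check that a batch of exported stems really did come out the same length, a short script using Python’s standard `wave` module can compare their durations. This is just my own sketch of a sanity check, not part of the project’s actual tooling:

```python
# Sanity check (illustrative sketch): verify that a set of exported WAV
# stems all have the same duration, which is what the leading "silence"
# clip is meant to guarantee.
import wave

def stem_lengths(paths):
    """Return {path: duration in seconds} for each WAV stem."""
    lengths = {}
    for path in paths:
        with wave.open(path, "rb") as w:
            lengths[path] = w.getnframes() / w.getframerate()
    return lengths

def stems_aligned(paths, tolerance=0.01):
    """True if every stem has (nearly) the same duration, in seconds."""
    lengths = list(stem_lengths(paths).values())
    return max(lengths) - min(lengths) <= tolerance
```

If `stems_aligned()` comes back false, one of the stems is probably missing its leading silence clip.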

Unfortunately, this step is manual — I have no way to automate the generation of the stems from Audacity. That will require scripting abilities that I think this version of Audacity (2.2.1) doesn’t support, although I hear this may be coming in a future version.

These are the inputs I’ll use to construct my Ardour project files, which will then be rendered, and that output will be incorporated into the Kdenlive project files for each sequence, as well as the Ardour project for the entire episode. The sequence projects contain the “diegetic” or “source” sound from each sequence, such as voices, sound effects, and ambience.

I also created the stems for the entire episode project file, which includes the “non-diegetic” sound, such as soundtrack music and voice-overs.

Feb 6, 2019 at 4:00 PM

Worklog: Learning to Use Ardour

This is the first worklog screencast I’m willing to post, because it’s production work and doesn’t have any confidential data in it (I probably won’t post these regularly, anyway, because it would be an overwhelming amount of video, but I thought a sample might be appreciated).

It shows my entire workday (well, the part of it I spent at the computer), sped up 30X. On this day, I mostly spent time trying to figure out how to do basic audio mixing tasks in Ardour. I also created audio “stems” in Audacity to use in my Ardour project, and I posted a brief image post about what I was doing.

Naturally, part of the learning process is doing searches and reading the online documentation for Ardour.

Some things are confusing going from one program to another, just because the jargon is different. I spent quite a while trying to figure out how to “keyframe the volume” of a track (what it would be called in Kdenlive), when it seems what I really wanted was to “set control points in the automation curve of a fader”, using the “touch” automation mode.

One thing I have not yet figured out is a workflow for setting up a filter with pre-set parameters and then turning it off and on with the automation — I think it might require using “busses”, but I haven’t got there yet. I settled today for just fading the ambience tracks in and out when needed. Filtering will probably be my goal tomorrow.

I’m still far from fluent with Ardour, but I think I made good progress on my first day sitting down with it — Blender was harder to get started with than this was. Prior to this, I had only read a few tutorials and watched Rosalyn do some limited mixing with Ardour (but she’s pretty new at it, too).

It’s a much more sophisticated piece of software than Audacity, but a bit harder to use because of that. It also involves a very different workflow, requiring a different mental model of what you are doing.

You can see that what I’m working on is the “press conference” sequence: it requires a lot of audio mixing work, since I need to combine two different ambience tracks and two different versions of the dialog, and it needs to be tightly synchronized. I felt this would be a good test for the software (and my skill with it). If I can mix this sequence, the others should be easy.

The video here is the sequence animatic, edited in Kdenlive.

The screencast music is “Void Sensor” by ZeroPage, under a CC BY-SA license.

Feb 5, 2019 at 4:00 PM

First Day with Ardour

Today I am learning how to use Ardour. Prior to this, I’ve only watched Rosalyn do it — and then only a tiny amount. Today’s goal is to simply get the “stems” from Audacity loaded into Ardour and synchronized with the video-sync track.

Although Ardour is for audio, not video, it allows you to load a video track as a reference source to get the timing right.

The plan is to mix each sequence* of the episode as a single Ardour project, with sequence-length stems produced in Audacity, and then apply filtering and level adjustments between tracks in Ardour to produce the source (“diegetic”) sound in the sequence (but not the non-diegetic sound, like the soundtrack music, which will be added at the episode level).

But this is my first attempt to actually set this up. Should be educational.

*If you have cinematic experience, you likely think of what I’m referring to as a “sequence” as a “scene” (as used in drama in general). The reason I try to use the word “sequence”, even though it’s a bit of a stretch from the cinematic meaning, is that in Blender, a “scene” means the 3D environment in which shots are constructed (or the associated data block in the file).

If I were to use the word “scene” in the cinematic sense, I would have two separate levels in the source hierarchy named “scene”, and it would get confusing.

Terry Hancock is the director and producer of "Lunatics!" and the founder of the "Lunatics Project" and the associated "Film Freedom" Project. Misskey (Professional/Director Account) Mastodon (Personal Account)