LUVORATORRRRRY - KMNZ

Published: Apr 3, 2023           Updated: May 15, 2023

I made this:

which is a rendition of this:

with my current obsession - KMNZ, who are by their own description a “virtual hip hop girls duo”. Future bass is gateway vtuber tunes. To be clear, I made the video, not the song. And to be doubly clear, the full body artwork is not mine.

This post is a diary, post-mortem, deep dive, retrospective on the making of my video. I don’t really intend to focus on one specific aspect of the production; sometimes you just want to rant about the flowers you saw along the way, you know? I’ve released all the project files, which are available here.

Monologue of the Beginning

Per usual, one day I was wading through the waters of The Youtube Algorithm. Like the two to three weeks before, the playlist consisted of my newfound musical interest, KMNZ. “Give me everything”, I said, because I’m blinded by adoration and their music can do no wrong, despite my dislike of the vocaloid and high-pitched baby bubbly j-pop of which KMNZ partially - or arguably wholly - consists. Anywho, the bangers banged on and when particularly bangin’ bangers occurred, I would search the original song to answer the question: “Do I really like KMNZ? Or do they just cover good music?” The answer is usually a little of A and a little of B, although that B is getting scarily large as the tubers continue to invade my KMNZ streams. After a few replays of LUVORATORRRRRY, I investigated the original and saw various other tuber covers. “KMNZ should have one of these”, I thought. At the time I didn’t have anything particular in mind by these, but I made this: chibi liz draft Ehh I mean… this! chibi liz So cute! And then animated it to do this!

Eee! Sooooo cute! Then it dawned on me: if I was truly a BIG FAN, I’d need to make a 3 minute 25 second video. Ugh.

Chibbs

So yeah, it all started with a drawing of some chibi thing from the original video. I’d say I was a bit brash in my decision to learn/use Live2D. I basically only knew the software by name and didn’t investigate alternatives. Although, kind of cool of past me to just jump in and try it, yeah?

Workflow

  • Create artwork in Krita
  • Export to Photoshop(.psd) using this Krita plugin I wrote
  • Import said psd to Live2D
  • Rig & Animate
  • Tweak animation sizing to fit within the free version’s export limit of 1280x720 (921,600 pixels)
  • Export images from Live2D because the free video export can’t export alpha channels
  • Compile all the images into a video using `ffmpeg -framerate 30 -pattern_type glob -i '*.png' -c:v ffv1 -pix_fmt yuva420p output.mkv`

I’d really love to automate the latter half, since if you ever update any of the visuals of the model, you have to re-export every animation. Which is especially a bother because I have an odd compulsion to fix stray pixels. By stray pixels I mean any pixels left outside of the drawing’s silhouette, mostly because I don’t properly draw within the lines. I’m more of a “draw the lines and erase the not-lines” kind of guy, if you catch my drift. Anywho, fixing these stray pixels is the worst return on time investment ever.
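The ffmpeg half scripts easily, at least; a minimal sketch (the folder layout is hypothetical - one directory of PNG frames per animation):

```python
import subprocess
from pathlib import Path

# For each exported animation folder, compile its PNG frames into a
# lossless ffv1 video with alpha, mirroring the manual command above.
EXPORTS = Path("live2d_exports")  # hypothetical export root

for anim_dir in sorted(EXPORTS.iterdir()):
    if not anim_dir.is_dir():
        continue
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", "30",
        "-pattern_type", "glob",
        "-i", str(anim_dir / "*.png"),
        "-c:v", "ffv1",
        "-pix_fmt", "yuva420p",
        str(anim_dir.with_suffix(".mkv")),
    ], check=True)
```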

Live2D

Live2D is a pretty good bit of software - kind of ugly, though, and its workflows are obtuse and disorganized.

e.g. To get any form of alpha clipping you have to enter an object’s ID into a text box, which is also apparently comma-delimited, although I never needed multiple images as a clipping layer. The worst is that when controlling a single object with multiple parameters, there exists a keyframe for every combination of parameter values. live2d rant If I wanted to modify a keyframe on the “Shoulder” parameter, I’d have to modify it 3 times, once for each keyframe of the “ElbowL” parameter. There is a multi-keyframe editing dialog buried in the top menus, which makes this almost a non-issue.

Actually, the real worst is that I don’t know of a way to tell Art Meshes to point to a different origin on the underlying Model Image. So there’s no way to share a Live2D rig between models that aren’t the exact same size. Sucks for me, because I decided to make my chibi drawings some level of “canonically sized”, so I couldn’t share the animations of KMNZ Lita with those of KMNZ Liz. Also, when I originally created my animation, my .psd had a strange 20px top buffer. Now I’m stuck adding a 20px buffer to every export I do of this particular animation 😅.

The list goes on and on, and there’s room for innovation in this space, especially regarding UX.

Symbols

001.symbols is the first scene; I’m just kind of winging the scene names here. Renaming scenes is also trivial in Blender, so some names just organically come up.

Pixel Art in Blender

I had an interesting/terrifying approach to working with pixel art in Blender. Using a pixel brush on a small 10x10 canvas in Krita, I’d create the pixel art. I’d export these images as png (with alpha) and then convert each image to an svg file using a program called pixel2svg. Blender has built-in support for importing svg files; however, pixel2svg generates a curve for every pixel, so importing the svg into Blender gives you a collection of curves, one per pixel. To properly work with them, you have to join the curves into a single object in Blender.

I wrote a Blender script/plugin to import svgs for my specific project. In addition to joining all the imported curves, the script automatically links the material on all the newly imported images and scales them (and applies said scale). It is tightly coupled with my project because it uses a hard-coded target object to copy from, but it shouldn’t be too difficult to point it at something specific to another project.
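The core of it looks roughly like this (a simplified sketch - the material name and default scale are stand-ins for my hard-coded values):

```python
import bpy

def import_pixel_svg(svg_path, material_name="PixelArt", scale=100.0):
    """Import a pixel2svg file, join the per-pixel curves into one
    object, link a shared material, and bake in the scale."""
    before = set(bpy.data.objects)
    bpy.ops.import_curve.svg(filepath=svg_path)  # one curve per pixel
    imported = list(set(bpy.data.objects) - before)

    # Join all the pixel curves into a single object
    bpy.ops.object.select_all(action='DESELECT')
    for obj in imported:
        obj.select_set(True)
    bpy.context.view_layer.objects.active = imported[0]
    bpy.ops.object.join()
    joined = bpy.context.view_layer.objects.active

    # Share one material across every import, then apply the scale
    joined.data.materials.clear()
    joined.data.materials.append(bpy.data.materials[material_name])
    joined.scale = (scale, scale, scale)
    bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
    return joined
```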

A wise colleague once told me to use the right tool for the job. And I clearly have not learned that.

That plugin also includes render/viewport visibility toggle commands that I used extensively in the project. One command copies each node’s render visibility to its viewport visibility, and another makes all nodes visible in the viewport. This gives an alternative to “moving things out of frame” for making items appear on screen. I honestly wouldn’t recommend this workflow, because when you muck about with animated viewport visibility, objects become hard to work with - they get deselected as you move around the timeline. Additionally, text nodes sometimes bug out and don’t become visible until you open them in edit mode. I don’t know though, the decluttering becomes really helpful with the pile of text nodes I have running around.
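Both commands are tiny at heart; roughly this (a sketch - the plugin wraps these as proper operators):

```python
import bpy

def copy_render_visibility_to_viewport():
    # Mirror every object's render visibility onto its viewport visibility
    for obj in bpy.data.objects:
        obj.hide_set(obj.hide_render)

def show_all_in_viewport():
    # Un-hide everything so no object gets lost while editing
    for obj in bpy.data.objects:
        obj.hide_set(False)
```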

Start Game

start-game To model the giant font text, I created the curves in Inkscape, then simply extruded said curves in Blender. I probably could have done it all in Blender, but I didn’t know how to use Blender’s curve Edit Mode at the time. Rotating a curve handle using Blender’s rotation hotkey? Creating new anchor points by extruding existing ones? This workflow is crazy.

I also created curves for the… glowing insets? on the letters. From the curves, I produced a “wire” and did a simple boolean operation on the letters. Naturally you’ll have to clean up some faces from the boolean operation, but eventually you’ll have a few faces where you can set an animated Emission shader. Basic Blender stuff, it’s what we all signed up for.
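Animating that emission is just a few keyframes on the shader’s Strength input; something like this (“GlowInset”, the node name, and the values are all placeholders):

```python
import bpy

# Pulse the glow by keyframing the Emission node's Strength input.
# The material and node names here are hypothetical.
mat = bpy.data.materials["GlowInset"]
strength = mat.node_tree.nodes["Emission"].inputs["Strength"]

for frame, value in [(1, 0.0), (15, 25.0), (30, 0.0)]:
    strength.default_value = value
    strength.keyframe_insert("default_value", frame=frame)
```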

Blender glares and flares SUCK. I went through a bunch of iterations trying to get the effect, from modeling a flare and compositing the sh*t out of it (that’s actually what the left flare is in this scene) to having a dedicated scene with a plane and camera bloom configured just right. I even considered this non-free plugin, which is pretty huge since I hate spending money. It’s like: “$39 or 39 hours of my life”. In the end, I just went with a free prerendered video of a flare, scaled up to the screen size and animated.

Walk Scene

Which is simpler, fixing the object or fixing the camera?

The problem is EVERYTHING has to be a child of the camera. I probably won’t do this again, but who knows what the future holds.

Hey look, a bug in Blender (I think)! Anything in “Alpha Blend” mode is invisible in the cryptomatte pass. Which is troublesome, because not being able to composite individual objects - and having Z-layers defined by render layers - makes life reeeally difficult. Yeah, you can animate how you composite the render layers, but compositing is already a drag to work in and reason about. The alternative “Alpha Clip” mode is ugly 😊.

Oh geez, I think I found another bug. If you use a Line Art modifier on a grease pencil pointed at a mesh that isn’t on the current render layer and try to render, Blender crashes. Something like that… I can’t reproduce it anymore, although it happened three times before.

Mesh World

This was the first time in the piece I used pretty artworks! Lovely! Cropping the official KMNZ artwork was a real pain. Is it Krita? I constantly had lightly transparent/stray pixels that would show up starkly in the Blender environment. I feel like I’ve erased and redrawn every outline at this point.

I used AI to upscale the image some unknown amount. Stuff is pretty good.

Also, I can’t get over the fact that these look like meat clubs: meat-clubs

Ahhh

Did I mention, Blender glares/flares suck? Here’s the compositing node graph of a basic flare:

Blender node graph for compositing a flare

And it doesn’t even look good. I ended up using a static image from unsplash.com (with a decent amount of post-processing).

Dance Break A

dancebreaka This was the one part that I had to design myself. I went for a vibe similar to this section of this video (sorry, can’t embed timestamps). I’m a real fan of the inverted background masks paired with the “WA” text, which, as a result, also inverted every other measure. Marpril’s production is really good. Anyone know why that video drops to 12fps when the camera is zoomed? I couldn’t really do the endless tunnel effect though, because it created too much geometry when I went with the simple route: noisy generation of cubes distributed on the faces of a single long cube. Which I still ended up using, just in a loopy fashion.
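For the record, the “simple route” was something to this effect (the object name and counts are made up):

```python
import bpy
import random

# Scatter small cubes at random spots on each face of one long,
# stretched cube. "Tunnel" is a hypothetical object name.
tunnel = bpy.data.objects["Tunnel"]
mesh = tunnel.data

for poly in mesh.polygons:
    corners = [mesh.vertices[i].co for i in poly.vertices]  # quad corners
    for _ in range(200):
        u, v = random.random(), random.random()
        # Bilinear interpolation across the quad face
        top = corners[0].lerp(corners[1], u)
        bottom = corners[3].lerp(corners[2], u)
        point = top.lerp(bottom, v)
        # Nudge off the surface along the face normal for noise
        loc = tunnel.matrix_world @ (point + poly.normal * random.uniform(0.0, 0.1))
        bpy.ops.mesh.primitive_cube_add(size=random.uniform(0.05, 0.2), location=loc)
```

Every scattered cube here is its own object, which is the “too much geometry” problem in a nutshell.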

I reused a gradient fluorescent bulb shader from a later scene and threw it on some procedurally generated “pcb route” type things. I learned a lot about Geometry Nodes that day.

After my scene was done, I was so excited and my heart beat so fast. It was really hype and really great. Although I still question whether my views on “what looks good” are valid. I’ve spent a few days away from the piece and I come back and… it still looks cool to me. Honestly, I’m not sure how much of it was what I wanted versus where I let the tools I could work with take me.

I find the brightness of the wires attracts the eye, so a viewer might miss the symbols that flicker in the center. The white rectangular noise tunnel is a bit ugly. When I brighten the distant wall, it becomes too obvious how short the tunnel is. The wires don’t converge into the distance. I couldn’t blur the distant silhouette of the cube tunnel to imply depth of field because the wires were on the same render layer. I forget why I didn’t separate them.

Babby’s first music video. Sue me I suppose.

Robots

robots I had an odd aversion to redrawing the robots in this scene, so I 3d modeled them and used a Line Art Modifier to get an outline/drawn effect. This workflow crashes Blender to no end. I suspect it has something to do with render layers and the Line Art Modifier, but I can’t reproduce it. botyard

In this scene, I duped a bunch of robots, but I didn’t use linked duplicates, so when I made fixes to the meshes - such as the little outsets on the top of the tv - they weren’t applied to all of them. I just fixed the meshes where the issue was visible, hehe. Also, this scene had so many vertices that Blender occasionally crashed when loading it, so I separated it into its own render layer. Or it was because of the Line Art Modifier, who knows.

Also, the background is a fire simulation, the details of which I couldn’t get right. It really bugs me; I don’t like it. It’s ugly. Additionally, I didn’t apply the trick where you fade in the start of the simulation near the end to make the loop seamless.

Dancing

dancingscene Bad name. Cyclic movie textures don’t work with the “Offset” value - the first “offset” frames are skipped on replay.

Nuts and Bolts

nutsandbolts Here’s how I modeled these vtuber tropes.

  • Grab STEP files from McMaster Carr
  • Import STEP file into FreeCAD and export as STL
  • Import STL file into Blender
  • Reduce vertices by applying a Decimate modifier or two

Done and done! I… hope that’s not illegal. And don’t you dare challenge my ability to model a screw. But do challenge me on making topologically sound screws.
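If you’re doing more than a couple of parts, the FreeCAD step can be scripted from its Python console; a minimal sketch (the filenames are made up):

```python
# Run inside FreeCAD's Python console: convert a STEP file to STL.
import FreeCAD
import Part
import Mesh

doc = FreeCAD.newDocument()
shape = Part.Shape()
shape.read("screw.step")  # hypothetical McMaster-Carr download

obj = doc.addObject("Part::Feature", "Screw")
obj.Shape = shape
Mesh.export([obj], "screw.stl")
```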

Disco

disco Because of this weird disco ball thing? light-sphere

Seriously, what is that? Because of the resolution limit on the free version of Live2D, I tried Inochi2D for the full body animations in this scene. One week before I tried it, Inochi2D had released support for exporting video of your animation, which is exactly what I needed! Unfortunately, it didn’t work at all for me, so I gave up. I probably could have asked questions in their Discord, but I don’t like internet interaction.

Inochi2D is already doing some things right compared to Live2D, such as simpler parameter creation and interpolated keyframes. It’s still rough though. Basic operations are completely undiscoverable, hidden behind various click combinations, e.g. shift-clicks, right clicks, and middle clicks.

I ended up using Blender’s Lattice modifier instead, which worked out quite nicely.

Previous Video Editor

Hi, previous video editor. It’s been 9 years since you’ve looked at this. Maybe you’re dead but…

I’m pretty sure I found the exact same font that you used: Misaki Gothic. But your pixels in the 8x8 font are slightly larger than mine. font-meme I think this is the solution to making the font work in a 3d context, because the original curves of the font are all broken and don’t properly resolve in Blender - and presumably in any 3d software. I couldn’t figure out how to do this myself though, so I ended up manually fixing the meshes… How did you do it, my man? Alternatively, there’s just not a lot of variation in 8x8 japanese fonts, so perhaps I just stumbled on a coincidental exact match. But no amount of fixing an 8x8 pixel font will generate this kind of dilation effect, so that particular nuance must be something in whatever editor you’re using. Probably a real video editor; Blender’s own video editor renders fonts properly, versus the Text objects I’m using in 3d scenes.

There’s one scene where the character’s hair bow is clipped. clipped Tell me your life story, I’d love to hear it.

You did in fact modify/tweak character, line, and word spacing throughout the whole piece. You even tweaked the spacing between individual characters. I see you and I admire you.

Spin

spin Because the scene starts with a spin from the last scene 😛.

Smoke

Smoke/fire simulations are by far the worst part of this whole thing. My gosh, I’m going to have the most terrible rant about every papercut I’ve experienced along the way. And then I’m going to feel so much better for having the experience realized and released. And maybe I’ll contemplate my life choices for a bit. So here goes…

When the forces are too strong and the domain size is too constrained, sometimes the simulation falls apart. This reality collapse might also happen when increasing the simulation resolution. So maybe your simulation looks fine at lower resolutions, but when you try your final bake, it just blows up. To add insult to injury, when this occurs the bake time becomes absurd. This also means using bounds collision on the domain is not possible. Rebaking the exact same settings sometimes creates a completely different result. And I don’t mean my noise got a different seed - I mean something that looks akin to an unapplied scale: my smoke has become 3x larger for no apparent reason. Maybe I just ran into some saving issue, because sometimes when I load my file, the XY coordinates of the bake have gone completely off the rails. Guess I’ll retweak all my force settings to conform to this newfound physics system. Sometimes your wind effectors stop applying and you have to recreate the domain. I’m pretty sure negative vortex inflow settings aren’t a thing; I can’t for the life of me get the force to push fire away from the center.

I actually can’t render one of my scenes in its entirety - a lengthier smoke simulation being the culprit. Just kidding! It was a bugged smoke bake… maybe? The simulations take so long that it’s hard to truly refine the visuals of a piece, especially when you’re going for a look with higher vorticity, since those small vortices only appear at higher simulation resolutions. Not that it matters; I don’t find the higher resolution simulations feasible anyway, mostly because Blender’s fluid simulation backend has no GPU support. Okay, I’ll run a simulation overnight, but I’ll come back to find it halfway complete and using 43GB of hard drive space. Yeah, yeah, 8 hours is nothing for fluid sims (I assume), but 8 hours is a lot of MY LIFE. Also, the viewport is completely unusable now because of the preview of the simulation. Honestly, there’s probably a setting for that one.

My next physics simulation belongs to Houdini, which is apparently the industry standard - or I’ll be like everyone else and use some stock footage. Although… I’ve gotten pretty good at setting up Blender simulations. Gosh, I’ll never be able to get anything done if I have to make all these effects myself… Look. Look at my beautiful node graph for compositing chromatic aberration: Blender node setup for chromatic aberration
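If you’d rather script that graph than click it together, the core trick is just splitting the channels and scaling red and blue in opposite directions; a rough sketch (node names are for Blender 3.3+, and the scale factors are arbitrary):

```python
import bpy

# Compositor chromatic aberration: separate the channels, scale red
# slightly outward and blue slightly inward, then recombine.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new("CompositorNodeRLayers")
sep = tree.nodes.new("CompositorNodeSeparateColor")
scale_r = tree.nodes.new("CompositorNodeScale")
scale_b = tree.nodes.new("CompositorNodeScale")
comb = tree.nodes.new("CompositorNodeCombineColor")
out = tree.nodes.new("CompositorNodeComposite")

for node, factor in ((scale_r, 1.004), (scale_b, 0.996)):
    node.space = 'RELATIVE'
    node.inputs['X'].default_value = factor
    node.inputs['Y'].default_value = factor

links = tree.links
links.new(render.outputs['Image'], sep.inputs['Image'])
links.new(sep.outputs['Red'], scale_r.inputs['Image'])
links.new(sep.outputs['Blue'], scale_b.inputs['Image'])
links.new(scale_r.outputs['Image'], comb.inputs['Red'])
links.new(sep.outputs['Green'], comb.inputs['Green'])
links.new(scale_b.outputs['Image'], comb.inputs['Blue'])
links.new(sep.outputs['Alpha'], comb.inputs['Alpha'])
links.new(comb.outputs['Image'], out.inputs['Image'])
```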

Finale

finale Curse that split screen effect, and don’t look at the rat’s nest of fades I have to make it work. Blooms don’t work when you have different render layers 😢

For the background, I made use of this very good tutorial. The high-level summary: use the particle system, generate a point cloud by exporting and re-importing as a .abc file, then use geometry nodes to convert the point cloud to Blender points and apply a material. They look very pretty!
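In script form, the import end of that pipeline is short (the file path and node group name are placeholders - the node group itself is the “mesh to points” setup from the tutorial):

```python
import bpy

# Import the baked particle cloud, then convert it to renderable
# points with a pre-built geometry node group.
bpy.ops.wm.alembic_import(filepath="//particles.abc")
cloud = bpy.context.selected_objects[0]

mod = cloud.modifiers.new("Points", 'NODES')
mod.node_group = bpy.data.node_groups["MeshToPoints"]
```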

So many effects and particles in the original video; it’s blinding and complex.

My compositing tree is so huge… it takes forever to render anything in this scene.

The End

It’s so beautiful and I loved every second of working on it.

Since I’ve been gone… Blender has released 3.5, which supports real-time compositing, and GPT-4 was released.

I really wish I had configured all my render outputs to be more than 1080p. It’s 202X for gosh sake!

Update: And KMNZ never even noticed me. Waaa. Oh well, I’m too big to fail anyway.