📔 Introducing the Warudo Handbook—your ultimate guide for everything Warudo!
We rewrote our documentation from scratch and... accidentally added a bunch of new content?! ✨😳
Starting with:
👉 12 step-by-step blueprint tutorials! From spinning yourself on a chair... 😵💫
Another important issue I rarely see people discuss: Live2D is a proprietary technology. If you have a 3D model you can use it pretty much everywhere without limitations; Live2D models only work in Live2D-licensed applications. Also…
People tend to think
#Warudo
is developed by some unknown, mysterious entity, but the entire
@hakuyalabs
is literally just me (dev) and
@YumemiyaYoyu
(everything not dev), and we've both been VTubers for years! 😛 We just created the 3D VTubing app we wanted.
This is honestly super cool even from the developer’s perspective
This really shows what you can accomplish with only blueprints!! No scripting, just dragging nodes 🥹✋🏻
#WarudoPro
🥤Pirate Boy VTuber
#ShiratoriAsuta
from
#hOuOu
(Bilibili: 白鸟Asuta_Channel) 🏴☠️
✨Brings you a video full of rich interactive features, from his latest 3D debut livestream event!🎇
🕊️Come and enjoy these fun and engaging moments with Asuta and his seagull “Pine”!🍍
It's been 186 days since Warudo's EA launch.
Here I am, sitting on a bus back home, having not slept since yesterday (trying to forcefully fix my terrible sleeping schedule), scrolling the
#Warudo
tag (as always) viewing your enthusiastic responses to the latest "Live2D" physics update... 🧵
🎄 My cover of "Merry Christmas, Mr. Lawrence" from last night's stream. 😊 Merry Christmas everyone!
(Hand tracking here is completely based on MIDI input!)
#Warudo
#VTuberUprising
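For intuition on MIDI-driven hand tracking like the cover above: each MIDI note event already encodes which key is being played, so note numbers can be mapped straight to hand/finger targets for an IK solver. A minimal sketch, where the message format, keyboard constants, and target structure are all my own illustrative assumptions, not Warudo's actual implementation:

```python
# Illustrative sketch: turning MIDI note events into hand-target
# positions for a finger IK solver. Constants and the tuple-based
# event format are assumptions for demonstration only.

LOWEST_NOTE = 21       # A0, the lowest key on an 88-key piano
KEY_WIDTH_M = 0.0235   # approximate width of one key slot, in meters

def note_to_x(note: int) -> float:
    """Map a MIDI note number to a horizontal key position (meters)."""
    return (note - LOWEST_NOTE) * KEY_WIDTH_M

def hand_targets(events):
    """events: iterable of (kind, note, velocity) MIDI-like tuples.

    Yields (x_position, pressed) pairs that a finger IK solver
    could consume to place the hand over the right key."""
    for kind, note, velocity in events:
        if kind == "note_on" and velocity > 0:
            yield note_to_x(note), True
        else:
            # note_off, or note_on with velocity 0 (also means "off")
            yield note_to_x(note), False
```

Feeding this a live MIDI stream gives the solver a target per key press; a real setup would also smooth between targets and assign individual fingers.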
Making free stuff is hard. If you don’t spoonfeed every tutorial, some people will just assume what they’re trying to do is impossible, leave you a negative review purely to vent, then disappear. You don’t even get to defend yourself, since they disabled comments.
Jaw dropped. I fully appreciate all the technical challenges & novelties involved in this, but I think Live2D rigging is already time-consuming & expensive enough, and it is much more difficult to make 2D models look 3D than to make 3D models look 2D.
VTS Fullbody Tracking - plugin for
#VtubeStudio
It can track: shoulder, elbow, knuckles (thumb, index, pinky), wrist, hip, knee, ankle, heel, foot index
Try it and join the Alpha!
Download & information on GitHub:
#VTSFullbodyTracking
#Live2D
#VTuber
…To create an app that can load custom Live2D models, even if the app is free, you need to “share revenue” with Cubism. That’s insane.
This doesn’t bother 99% of the population I guess, but something makes me uncomfy about 2D VTubing tech effectively monopolized by Cubism Inc.
@YumikoVT
Warudo dev here! Let me clarify a few things:
1. NiloToon is not free to use! The code you saw on GitHub is a "lite" version Colin created for educational purposes. All the fancy screenshots you are seeing do not come from the lite shader.
Have a very exciting
#Warudo
idea but don’t have time to write it right now. Oh well. Let’s hope I can squeeze some time in April and show y’all a prototype… 😙
An OMORI manga adaptation will be serialized in Kodansha’s seinen magazine, Monthly Afternoon, illustrated by Nui Konoito-san [此糸 縫(このいと ぬい)]. The manga will be made both for longtime fans of the game and for a new audience experiencing the story for the first time.
@Cimrai
Warudo dev here! Both programs are similar in a lot of ways. You can pretty much migrate any setup from one to another. So - it never hurts to try both! 😊
If you're going to try Warudo, check out the blueprints we shared that allow you to do this 👇
And just a few hours ago, I saw this clip from Remu (@.Remuchii_). I genuinely could not tell if it was Warudo or even if it was 3D at all, until I saw the idle animations & environments that I'm way, way too familiar with.
Then, after putting together a fancy trailer, we publicly launched in July 2023. And yes, this obviously-inspired-by-"Everything Everywhere All at Once" bit was also filmed by SLM himself.
2021 marks the year when I finally started to treat 3D VTubing as a technical problem to solve. There were already a few great apps out there like VSeeFace and VMagicMirror, but I wanted to play my piano.
So I spent my intern salary on a Rokoko suit and worked on this:
For the sake of branding, I always used "we" when talking as
@hakuyalabs
, so it's hilarious someone thought we have a HR person running the Warudo account. That HR person is me!! 😭
Maybe I should just reply with this account more often...
I don’t get why people get upset over this. Freelancing still requires responsibility - if you can’t deliver something, you shouldn’t get paid.
I’ve always trusted artists and paid full upfront, but that did result in two illustrations that never got delivered after 4 years. 🥲
After lots of feedback, we’ve reevaluated the need to enforce a 150-day guaranteed delivery.
Instead, the guaranteed delivery date will still be mandatory, but the limit will be 2 years.
The VGen refund protection period will still be 150 days after each payment.
This means if a
I guess my point is: I'm happy that this isn't a lonely journey anymore. Maybe there aren't that many 3D VTubers in the world, at least not very soon, but we now have a community that is constantly exploring, and expanding, the boundaries of 3D VTubing.
Then I started looking into 3D. I soon realized that once you jump out of the 2D canvas, there are many things you can do in 3D - in fact, too many.
I'm not sure if this even counts as VTubing, but here's a fun project that I made to play piano in VR with a character playing violin.
@YumikoVT
3. Therefore, to make this more affordable, we actually partnered with NiloToon to offer a special VRM -> NiloToon service, which allows you to convert your model into NiloToon for use in Warudo Pro, *without* purchasing the NiloToon shader.
So it's quite the opposite!
It feels like a long, long dream. I was into VTubing tech for a long time - my first hackathon project in 2018, a freakin' Chrome new tab extension, literally threw in a Live2D character just because.
That's when I saw the blueprint system in Unreal Engine, and I was like: hey, how about we create a game engine for VTubers?
So the first version of Warudo was born. By the way - "Warudo" was going to be a temporary codename, but I couldn't think of a better name, so it stuck.
So, to be honest, when VNyan came out in November 2022 and also adopted a node-based approach, I was super happy and felt validated. I hope Suvi doesn't mind me saying this, but I truly think great minds think alike! 😉
In July 2022, I started inviting VTubers to try Warudo...
Then I remember the time when Esprite and Koko saw the potential in Warudo's blueprints when no one was playing with them. The time when Kana casually dropped 20 Warudo tutorials when only programmers could interpret our way-too-technical docs back then.
Two years later, COVID hit, so I decided to become a VTuber (I'm sure many of you can relate). Back then everyone used FaceRig, and as pioneering as it was, it was simply too barebones. So I started working on my own 2D VTubing app...
But the more I tried to work on it, the more I realized how difficult it was to implement features that don't interfere with each other.
Anyone with basic Unity knowledge can write a 3D VTubing app that fits their needs. But making it work for *all* VTubers is another story.
@YumikoVT
2. To use NiloToon, you usually have to purchase the NiloToon shader.
However, NiloToon is more often sold to companies (e.g., Hololive) for their internal Unity projects, not indie VTubers, so the cost can come off as really high if you just want NiloToon on your VTubing model.
@YumikoVT
4. Also we're not the only app with NiloToon - for example Charm (used by
@zentreya
) also supports it.
If I have to guess why there are so few apps that support it: it's just harder to integrate with another rendering pipeline, and most users are less interested in a paid shader.
It was quite a lonely process for the first few months! My friends knew I was working on this, but none was convinced that 3D would enter mainstream - there were so few 3D VTubers on Bilibili, the platform I stream and socialize on.
Also- "Aren't nodes too advanced for VTubers?"
So yeah, making free stuff is hard. Most people are nice, but at the end of the day somebody will judge your work based on their minimal or misguided effort. Still a long way to go for me to fully accept that, I guess.
...adding more and more features as they were requested, while ironing out accessibility issues to make sure everyone can enjoy Warudo.
It became my second full-time job. From July 2022 to July 2023, I pushed 222 updates to Steam.
The times when I saw a VTuber I knew using Warudo in their streams. The times when SLM and I both caught a cold while working towards the deadline for a concert. The times when we released a long-awaited feature and everyone commented "YESSSSSSSSSS" under the tweet.
This along with a few other attempts eventually led to a revelation moment in 2022:
3D adds a new dimension, so a 3D VTubing app also needs a completely new paradigm. When you're 3D, people expect you to do all sorts of things that couldn't be done in 2D...
...and later at some point, I even reached out to Denchi to ask if I could contribute to VTube Studio before it was even released!
Eventually, I gave up on developing my own 2D VTubing app due to Cubism asking me to "pay a cut" for an open-source 2D VTubing app. No thanks.
Not much has changed on the grind though. In the time span of 186 days, we released 138 updates for Warudo. You may have noticed sometimes Warudo got updated without a changelog or version bump - that was usually me pushing a hotfix and being too lazy to write a changelog. 😛
...whether it's one of those fancy concerts from Hololive, or getting launched into space by a redeem like CodeMiko.
There are way too many possibilities to make all of them built-in features. VTubers need to be able to create their own features.
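The "create your own features" idea maps naturally onto a node graph, where dragged-together nodes feed each other values. A toy sketch of how such a graph can be evaluated; the class, function names, and example wiring here are my own illustration, not Warudo's blueprint engine:

```python
# Toy node-graph evaluator: each node wraps a function plus named
# inputs that are either literal values or other nodes' outputs.
# Illustrative only - not how Warudo's blueprints actually work.

class Node:
    def __init__(self, name, fn, **inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

def evaluate(node, cache=None):
    """Recursively evaluate a node, resolving upstream nodes first.

    A cache keyed by node name means shared upstream nodes are
    computed once per evaluation pass."""
    cache = {} if cache is None else cache
    if node.name in cache:
        return cache[node.name]
    args = {k: evaluate(v, cache) if isinstance(v, Node) else v
            for k, v in node.inputs.items()}
    cache[node.name] = node.fn(**args)
    return cache[node.name]

# Example graph, wired like blueprint nodes: (pitch + offset) * gain
pitch  = Node("pitch",  lambda value: value, value=10.0)
offset = Node("offset", lambda a, b: a + b, a=pitch, b=5.0)
gain   = Node("gain",   lambda x, k: x * k, x=offset, k=2.0)
```

Calling `evaluate(gain)` walks the wires backwards and produces `(10 + 5) * 2 = 30.0`; the point is that users compose behavior by connecting nodes, never writing code.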
Especially frustrating when it’s clearly a nontechnical user trying to use an advanced feature who decided not to look at any tutorials, didn’t ask in the Discord server / Steam forums, and just concluded “blueprint bad” because they didn’t understand it.
Some outreach wasn't successful. Some new features weren't used at all. Rationally, I know this is completely normal; no app's perfect, and Warudo is still young. But emotionally, Warudo has taken more than 1/10 of my lifetime, and I selfishly wanted more people to appreciate it.
I am also fortunate to have had SLM and Nekotora join the team this year. I could focus more on the core product itself, while almost everything else, from graphic design & marketing to accessibility improvements, was taken care of by them, for which I'm forever grateful.
Obviously, if Warudo Editor were like this today, Warudo would never have had more than 10 users. I soon realized that while customization is important, I still wanted my non-programmer friends to be able to use it.
So, the "Assets" page is born: a more traditional way to tweak your settings.
The times when Hoshi and ZeroSkyes answered questions faster & better than I could. The times when everyone proudly shows their best moments with Warudo in the
#showcase
channel. The time when I flew to Shanghai to supervise a VTuber concert using Warudo with optical tracking.
.@AniLive_app
This is egregiously wrong. No, you do not need PSD/cmo3 files to animate a Live2D model, unless you don't use the Live2D SDK (which is literally impossible). I struggle to find a *single* reason why raw files are needed in the first place. Please explain.
@KiraOmori
Thanks for the mention! We definitely understand your concerns.
AniLive doesn’t use VTS, so we ask for the files to adapt models to our in-house software.
We use them for the sole purpose of importing + making avatars move on the app!
More info to follow w/o X character limits!
It never once felt tiring, though. The incredible feedback from the community constantly encouraged me to "add one more thing" at 5am.
(Disclaimer: This isn't good practice. In fact, I added too many features that I didn't even have time to document.)
@Cap039
@AniLive_app
heya! So from what I'm gathering, AniLive sends the whole model to the viewer? 🤔 Sorry, the more I learn about the app, the more concerned I am about it. I want to give you guys the benefit of the doubt, but there's a lot of concerning stuff around
(source )
Fortunately, a long vacation spent on rewriting Warudo's documentation was all I needed. While writing the tutorials, I slowly came to appreciate all the effort we spent in the last 2 years. Boy, expressions weren't even included in the character asset back then!
The time when Feline was streaming coding the Adventure Items plugin and I joined the live chat to help debug. The times when Veasu quietly uploaded another super powerful plugin without a reasonable amount of documentation from my side.
Are there mistakes that I regret? Yes. Basic tracking options like lip sync should be easier to configure. Implementing webcam multicasting was a terrible decision that created more problems than solutions. Too many "advanced" options that no one uses.
@gelisor
I have exactly experienced this - Live2D rigger ghosted me when I asked to purchase the original project files for doing some model upgrades, so I had to commission another rigger to redo the entire rigging. 😰
It's not all sunshine and rainbows though. I was anxious for the past few months. In a November user survey, people praised Warudo's functionality but still thought it was hard to use. Meanwhile, we had lackluster documentation that hadn't been updated in almost a year.
I still remember an email exchange in which a guy tried to convince me their blueprint was correct, and that “it’s just an issue with Warudo,” even after I pointed out their blueprint logic was incorrect and why. When I asked them to join Discord to discuss further, this was their reply:
A powerful documentary. I didn't expect finding out who wrote the Disney Channel theme would bring me to tears...
Much better than Netflix's disastrous "documentary" (more like conspiracy theories) on MH370. Oh god.
This! Seen a lot of Palworld discussions lately, blaming the devs and players for creating/playing a game filled with AI art - which is unproven and most likely not true (text-to-3D GenAI is bad & unusable right now). Whether it plagiarized Pokemon is a more interesting debate.
Y’all gotta stop getting your “facts” from social media.
Jesus.
There’s been no definitive proof of this game using AI art, yet social media has made up its mind that it has.
This is goofy.
@melfinadarling
Basically put, you need Warudo Pro to render Warudo character mods that are configured with NiloToon by a modeler who owns NiloToon (ex:
@mofuworkshop
@llay11a
). If your modeler has NiloToon, you only need the Warudo Pro license. Otherwise, we can convert your model for you. 🙌🏻
I also can't stress enough that you should not release technical statements without involving your engineering team. "AniLive doesn't use VTS" is as nonsense as "AniLive doesn't consume oxygen."
@roxymanticore
Thank you for your kind support Roxy! 😄 Just to be clear though, I agree with other replies here that many users just find VSF/VNyan more intuitive. Warudo isn't for everyone, but I'm happy it works well for you!
🤩💯 A new
#Warudo
update has just been released! As promised, let's show you how you can add:
1) Live2D arm sway;
2) Natural upper body rotation;
3) Live2D wiggle eyes
to your 3D
#VTuber
avatar. Just like this! 👇
A thread 🧵
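For intuition on what nodes like "Live2D arm sway" compute under the hood: conceptually it's a time-based oscillation layered on top of the tracked arm rotation. A hedged sketch of that math; the amplitude and frequency values are made up for demonstration and are not the actual blueprint defaults:

```python
import math

# Illustrative sketch of an arm-sway offset: a gentle sine wave
# layered on the tracked arm rotation. Constants are assumptions
# for demonstration, not values from Warudo's blueprints.

def arm_sway(t: float, base_deg: float,
             amplitude_deg: float = 3.0, freq_hz: float = 0.4) -> float:
    """Return the arm rotation (degrees) at time t: base plus sway."""
    return base_deg + amplitude_deg * math.sin(2 * math.pi * freq_hz * t)
```

Evaluated every frame with the current time and tracked rotation, this produces the subtle idle motion; in a node graph, the same thing is a Time node feeding a Sine node feeding an Add node.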
@KiraOmori
@AniLive_app
Requiring the PSD file for checking missing assets is more BS. If you have the runtime file, you have all info you need to detect missing assets and prompt the user to upload a correct model.
Alpha stage is not a good excuse for breaching trust and giving non-answers like this.
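To back the claim that runtime files alone suffice: a shipped Live2D model includes a `.model3.json` whose `FileReferences` section lists every file the runtime needs (the compiled `.moc3`, textures, physics, and so on), so missing assets can be detected with no PSD/cmo3 at all. A minimal sketch, with error handling simplified; the key names follow the public Cubism runtime format:

```python
import json
from pathlib import Path

def missing_assets(model3_json_path: str) -> list[str]:
    """Return referenced files missing next to a .model3.json.

    The runtime JSON's FileReferences section already declares the
    moc, textures, and physics files the model expects, so checking
    for missing assets needs no source PSD/cmo3 files."""
    root = Path(model3_json_path).parent
    refs = json.loads(Path(model3_json_path).read_text())["FileReferences"]
    expected = [refs.get("Moc")] + list(refs.get("Textures", []))
    if refs.get("Physics"):
        expected.append(refs["Physics"])
    return [f for f in expected if f and not (root / f).exists()]
```

An app can run a check like this at import time and prompt the user to re-upload a complete model, which is exactly the "detect missing assets" case being discussed.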
@KiraOmori
@AniLive_app
I don’t buy this at all, frankly. This is like an app requesting front camera permissions because they want to check the user is a human. There are other non-invasive ways to resize and position a model. (1/2)
No technology in recent years gave me a “point of no return” moment until Sora. The future is both exciting and terrifying. The fact that a model trained with enough data can generate a Minecraft gameplay video is insane & makes OpenAI’s AGI claim much more convincing.