Making Blender tools outside of Blender (part 1)

Two years ago, when I started working with Blender coming from Maya, I was very frustrated by Blender’s GUI limitations for TDs. The options for creating tools looked too limiting, and as far as I’m concerned they still are.

Don’t get me wrong, there are some well-designed features for TDs. Creating an operator is easy and you can then use it everywhere you need it; the API is strong and I like it. My complaint is more about windows and widgets. And I had to find solutions.

Windows and widgets

It’s simple: Blender doesn’t want us to create windows, and relies on embedded frames in the layout. Why? Because Blender was smartly designed for a single user on a single, small screen. Which is great if you work on a laptop, for example. But it is not the setup you find with a professional or a studio, where you often have two big screens and therefore room for multiple windows. My point is: let the user decide what should be in a separate window and what shouldn’t, because building the complex interfaces production often needs, stuck in the left menu or in a very limited popup window, is complicated. And for the record: I am not saying Maya is a model to follow, it’s not! (Having windows end up outside the screen is bad, Autodesk; you have known that since day one and it’s still not fixed.)

On to the topic of widgets. Blender comes with the most commonly used widgets you need to create a tool: buttons, text fields, numeric fields, lists (though mainly single-row) and groups. But that’s pretty much it. If you need something else: code it. In my opinion, that is not easy for an average TD in a hurry (as we always are in production). You have to code it in OpenGL, through Python, which is clearly neither the fastest nor the easiest approach. If only Blender did what everyone else agreed to do and left the option to the user: include PyQt or PySide and let users do what they want. But no… and they don’t want to hear about it.
For example, François Grassard (whom I’ll talk about later) and I asked one of the main developers of the Blender interface whether we could at least have a webpage widget, so we could load a page, talk to Blender through a JavaScript API, and do whatever we want inside it. He just answered, “I don’t see the point. No need.” No need? You may have no need. But I think a basic principle about how others might use software is: we have no idea!

In my opinion, here is a core issue with the Blender community and devs: the way they sometimes make you feel they know all the ways of using the software. Without any doubt, they know a lot of them, and they are the pillars of one of the best free communities in existence. But they cannot know them all.
I’ve witnessed software, frameworks and APIs used in ways their developers could never have imagined. There may be as many ways of using a piece of software, or of building a workflow, as there are users. Not all of them will work the way you do, and trying to impose one way is not the right answer. Software gets used when it fits the demand, and the demand here is pretty simple: leave us room to create interfaces you have not thought of and cannot imagine, because you cannot cover every use case. You don’t often have 20-50 people (or more!) working on the same project to push the limits, or the experience of a 400+ user production.
And it’s not only about widgets; some interaction features, like advanced mouse control, are missing or hard to get in Blender: middle-click drag, double-click vs. single click (on a button, for example), right-click with a specific menu or behavior on a button, etc.

I think it’s really important to admit that none of us can have a precise idea of the needs to come, and it’s important not to impose one vision. The more Blender is used, the more this capability, widely present in Autodesk’s or The Foundry’s apps, will be required. Even Adobe has left the option open to create web interfaces within their software, to extend it and build custom interfaces.

Web apps

It’s summer 2015 and I have that growing frustration. I asked myself: “Can we build Blender tools… outside of Blender?” How could I get a third-party app, which could be a Qt app, a command-line script or a webpage, to communicate with Blender?
A web app would be just great, as you can very easily build dynamic pages with an enormous number of widgets (you can include any web graphical framework or JavaScript library), keyboard shortcuts, advanced mouse control, etc.
I presented the idea to François “Coyhot” Grassard, one of the most active French Blender users. He offered me a piece of code that he and a colleague, Jonathan Giroux (@koltesdigital), had created for demoscene events, where they needed a way to push actions to Blender in real time using simple websockets.

That was extremely interesting, mainly because websockets are a standard protocol. Most languages can use them, so it’s not limited to Python; JavaScript handles them natively in most browsers. Websockets are also very light to handle. And, of course, they are cross-platform.

Main concept

To use websockets, we need something waiting to receive the call from that third-party app: a simple, small webserver, which in our case is ws4py (WebSocket for Python).
But for execution reasons within the Blender main loop, the server had to run in a subprocess with its own loop. To communicate with Blender, we created a queue shared by Blender and the webserver. Think of it as a mailbox, with the webserver as the postman outside Blender’s garden dropping messages into it. When the server gets a message, it adds it to the queue. In Blender, an operator called on each loop checks the queue; if there is something there, it takes the message and removes it from the queue.
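The mailbox idea can be sketched with Python’s standard thread-safe queue. Names here are illustrative, not the actual add-on’s API; in real Blender the polling function would run from a timer or modal operator:

```python
import json
import queue
import threading

# Shared "mailbox": the server thread writes, the Blender side reads.
message_queue = queue.Queue()

def server_side_receive(raw_message):
    """Called by the websocket server when a client sends a message."""
    message_queue.put(raw_message)

def blender_side_poll():
    """Called on each Blender main-loop tick.
    Drains one message if present; never blocks the UI."""
    try:
        raw = message_queue.get_nowait()
    except queue.Empty:
        return None
    return json.loads(raw)

# Simulate the postman dropping a letter from another thread.
t = threading.Thread(
    target=server_side_receive,
    args=('{"operator": "pose.apply", "params": {"factor": 0.5}}',),
)
t.start()
t.join()

msg = blender_side_poll()
print(msg["operator"])  # pose.apply
```

The point of `get_nowait` is exactly the constraint described above: the check happens inside Blender’s own loop, so it must return immediately whether or not mail has arrived.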
The message is a JSON string containing the useful information: who is asking (by default only localhost can get through), which operator is requested, and which parameters are provided. If the operator exists, it’s called with the provided options. If the operator has a callback option, a message ID is provided so we can answer back later.
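A message in that shape might look like this. The field names are my guess from the description above, not necessarily the prototype’s actual schema:

```python
import json

# Hypothetical wire format: sender, requested operator, parameters,
# and an optional message ID when the caller wants an answer back.
request = {
    "sender": "127.0.0.1",          # only localhost gets through by default
    "operator": "pose.apply_pose",  # illustrative operator name
    "params": {"pose_name": "hand_fist", "factor": 1.0},
    "message_id": "42",             # present only if a callback is wanted
}
wire = json.dumps(request)

# Blender side: decode, check the sender, note whether to answer later.
msg = json.loads(wire)
allowed = msg["sender"] == "127.0.0.1"
wants_callback = "message_id" in msg
print(msg["operator"], allowed, wants_callback)  # pose.apply_pose True True
```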

The first version was pretty raw, allowing any Python command to be executed. So I turned it into an operator, usable as a module, that could run the server on custom ports and also stop it. And above all, for security reasons, I limited execution to existing operators (not arbitrary Python commands), added a basic communication operator, and implemented the possibility for an operator to send feedback in reply to a message. So instead of only pushing information from outside, you could also get answers.
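Restricting execution to known operators can be sketched like this. A toy registry stands in for Blender’s real operator system, and the names are hypothetical:

```python
import json

# Stand-in for Blender's operator registry (bpy.ops); names are made up.
REGISTERED_OPERATORS = {
    "pose.apply_pose": lambda **kw: f"applied {kw.get('pose_name')}",
    "ws.echo": lambda **kw: kw.get("text", ""),
}

def dispatch(raw):
    """Run a requested operator only if it is registered; never eval raw code."""
    msg = json.loads(raw)
    op = REGISTERED_OPERATORS.get(msg.get("operator"))
    if op is None:
        return {"error": "unknown operator"}
    result = op(**msg.get("params", {}))
    reply = {"result": result}
    if "message_id" in msg:  # the caller asked for a callback
        reply["message_id"] = msg["message_id"]
    return reply

print(dispatch('{"operator": "ws.echo", "params": {"text": "hi"}, "message_id": "7"}'))
# {'result': 'hi', 'message_id': '7'}
```

The lookup table is the whole security story: an unknown operator name yields an error instead of an `eval`, which is the difference between the raw first version and the restricted one.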

From here, we had a way to push messages to Blender, let’s say in real time (it’s super fast 1), from any outside source capable of using websockets. It was time to build something.



The first use was a prototype for a pose library. Such a tool lets the user save poses in a shared location, not tied to one rig. A saved pose may be the full pose of a character, or just a part of it (the hands, the eyes, …), so we can reuse it later on the same character or on any character using the same animation rig structure.
There is a similar feature included in Blender, but it’s (or it was) limited to the full rig rather than part of it, and it’s stored within the actions of a given rig, which is hard to share and maintain. By the way, my friend Jasper van Nieuwenhuizen made a nice update to the default tool with his add-on, Pose Thumbnails.

So I built a small webpage and a Blender add-on adding new operators to communicate with that webpage. In Blender you select the rig you want to apply your pose to and set the timeline to the right frame. On the webpage, sorted by character or pose type (there is a little folder system), you browse for the pose you want to use. Double-click and the pose is instantly applied to the selected rig in Blender.
But you could also middle-click drag, not to apply the pose itself, but to blend a percentage of the selected pose with the current pose in Blender. So you could mix poses. Check out the video at the end of the article to get the idea.
And within Blender, if you had selected just the leg controllers, the pose would apply only to that selection, making it even easier to mix poses.
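The middle-click blend boils down to a linear interpolation between the current pose and the stored one, limited to the selected controllers. A minimal sketch, with bone values simplified to single floats (a real rig would interpolate per transform channel):

```python
# Hypothetical sketch of the middle-click-drag blend; not the prototype's code.

def blend_pose(current, stored, factor, selected=None):
    """Linearly blend `stored` into `current` by `factor` (0.0-1.0).
    `current`/`stored` map bone name -> value. If `selected` is given,
    only those bones are affected, mirroring the selection behavior."""
    bones = selected if selected is not None else stored.keys()
    result = dict(current)
    for bone in bones:
        if bone in stored and bone in current:
            a, b = current[bone], stored[bone]
            result[bone] = a + (b - a) * factor
    return result

current = {"leg.L": 0.0, "leg.R": 0.0, "arm.L": 0.0}
stored = {"leg.L": 1.0, "leg.R": 1.0, "arm.L": 1.0}

# 50% blend, legs only (the user had only the leg controllers selected).
print(blend_pose(current, stored, 0.5, selected=["leg.L", "leg.R"]))
# {'leg.L': 0.5, 'leg.R': 0.5, 'arm.L': 0.0}
```

Dragging the mouse just re-runs the blend with a new `factor`, which is why the interaction feels continuous.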


The good thing about working this way is that I can use any existing web feature or widget, so the options are very wide. And the webpage could easily be shared with a group of users over a local network or the Internet (for distributed teams). But, as I said, you could use Qt, or even another Blender, to communicate with your Blender. It could work over a network, so we can imagine two users on two Blenders sharing information in real time for set dressing or whatever other task you can imagine.



That (unfinished) prototype, available on our GitHub, had several limitations, including:

  • If Blender crashes, the server might not be shut down, blocking the port on the next launch. We need a better way to shut down, or something that checks whether Blender is still alive.
  • No auto-search of available ports yet, and the JavaScript client doesn’t try ports other than the default one.
  • I need to fix the previous point so users can run multiple Blender instances and choose which one gets the messages from the browser.
  • On some operating systems, window focus issues when coming back from the browser might force you to click on Blender to set the focus back (an annoying, useless click).
  • The pose library tool doesn’t work on characters with multiple armatures, or on multiple characters (like a horseman on a horse).
  • I want to create a better server add-on that allows a whitelist of IPs (not only localhost), so you could send commands from a trusted source (useful in a studio).
  • The server in the demo is included in the colibri package, but it should be a complete standalone solution.
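For the port auto-search point, one simple approach is to scan a small range and take the first port that binds; a browser client could then try the same range until a connection succeeds. The range here is illustrative, not the prototype’s default:

```python
import socket

def find_free_port(start=8137, end=8147):
    """Return the first port in [start, end) we can bind on localhost.
    A real server would keep the bound socket instead of closing it,
    so the port stays reserved for the websocket listener."""
    for port in range(start, end):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue  # port in use, try the next one
    raise RuntimeError("no free port in range")

port = find_free_port()
print(8137 <= port < 8147)  # True
```

Scanning a known range (rather than binding port 0 for a random free port) is what makes the client side workable: the browser only has a handful of candidates to probe.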



With that prototype, François and I had the chance to give a 25-minute talk at the Blender Conference 2015. You can watch it online on YouTube; you will see a live demo of what I’m talking about, as well as other examples of connections that François built.


That’s it for now. In an upcoming article, I’ll go deeper into how to build the tools and the connection with the currently available script.


  1. It is executed on each Blender main loop, something like 60 times per second. But for those who seek something even faster, and we have met some, it might not be the solution yet… 