Bpycv: Computer Vision and Deep Learning Utils for Blender

Mar 31, 2024 | Data Science

Welcome to the transformative world of Bpycv – where computer vision intertwines with deep learning within Blender! In this blog, I’ll guide you through the remarkable features, installation steps, and the exciting demos that Bpycv has to offer. Ready to dive in? Let’s get started!

Features

Bpycv comes packed with a plethora of features aimed at enhancing your Blender projects. Here’s what you can expect:

  • Render annotations for semantic segmentation, instance segmentation, and panoptic segmentation.
  • Generate 6DoF pose ground truth for enhanced spatial awareness.
  • Render depth ground truth for more accurate depth perception.
  • Pre-defined domain randomization to diversify your synthetic data.
  • Easy installation and demo running for swift project initiation.
  • Support for Docker to ease your setup process.
  • A versatile Python codebase for building synthetic datasets.
  • Conversion support to Cityscapes annotation format.
  • Simple development and debugging: no complicated packaging, just direct use of Blender’s native API.

Installation

To begin utilizing Bpycv, make sure you have Blender version 2.9 or later. Here’s a step-by-step guide:

  1. Download and install Blender from the official website.
  2. Open Blender’s installation directory in a terminal and install bpycv into Blender’s bundled Python with pip.
  3. For Windows users: run the terminal as administrator so pip can write into the installation directory.
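The steps above can be sketched as follows. The directory and Python version differ between Blender releases, so treat the paths below as placeholders to adapt to your install:

```shell
# cd into Blender's bundled Python (version numbers are placeholders)
cd "<path to blender>/3.6/python/bin"

# bootstrap pip inside Blender's own Python, then install bpycv
./python3.10 -m ensurepip
./python3.10 -m pip install --upgrade pip setuptools wheel
./python3.10 -m pip install --upgrade opencv-python bpycv
```

The key point is to install into Blender’s own interpreter, not your system Python; otherwise Blender’s scripting environment will not see `bpycv`.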

Demo

To showcase the functionality of Bpycv, let’s run some demos!

1. Fast Instance Segmentation and Depth Demo

Copy-paste this Python code into the Scripting Text Editor and hit the Run Script button (or press Alt+P):

```python
import cv2
import bpy
import bpycv
import random
import numpy as np

# remove all MESH objects
[bpy.data.objects.remove(obj) for obj in bpy.data.objects if obj.type == "MESH"]

for index in range(1, 20):
    location = [random.uniform(-2, 2) for _ in range(3)]
    if index % 2:
        bpy.ops.mesh.primitive_cube_add(size=0.5, location=location)
        categories_id = 1
    else:
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=location)
        categories_id = 2

    obj = bpy.context.active_object
    # give each object a unique inst_id, which bpycv uses for instance annotation
    obj["inst_id"] = categories_id * 1000 + index

# render image, instance annotation and depth in one line of code
result = bpycv.render_data()

# save results
cv2.imwrite("demo-rgb.jpg", result["image"][..., ::-1])  # RGB image (reversed to BGR for OpenCV)
cv2.imwrite("demo-inst.png", np.uint16(result["inst"]))  # instance map
depth_in_mm = result["depth"] * 1000  # bpycv reports depth in meters
cv2.imwrite("demo-depth.png", np.uint16(depth_in_mm))  # 16-bit depth in millimeters
cv2.imwrite("demo-vis(inst_rgb_depth).jpg", result.vis()[..., ::-1])  # combined visualization
```

In this code, we scatter cubes and spheres at random positions, akin to a lively art installation where artists throw random shapes across a canvas. Each object’s `inst_id` lets Bpycv recognize and annotate its location and category.
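Because each `inst_id` above packs the category and the per-object index together (`category_id * 1000 + index`), the saved instance map can be split back apart with integer arithmetic. A minimal sketch, using a toy array in place of a real render (real values would come from `cv2.imread("demo-inst.png", -1)`, which loads the 16-bit PNG unchanged):

```python
import numpy as np

# toy instance map standing in for a rendered demo-inst.png
inst = np.array([[0, 1001],
                 [2002, 1003]], dtype=np.uint16)

category_map = inst // 1000  # 1 = cube, 2 = sphere, 0 = background
index_map = inst % 1000      # per-object index within the scene

print(category_map)
print(index_map)
```

This recovers both a semantic segmentation (`category_map`) and per-instance labels (`index_map`) from the single instance image.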

2. YCB Demo

Create a new directory and clone the necessary resources:

```shell
mkdir ycb_demo
cd ycb_demo
git clone https://github.com/DIYer22/bpycv.git
git clone https://github.com/DIYer22/bpycv_example.git
cd bpycv_example
blender -b -P ycb_demo.py
```

3. 6DoF Pose Demo

Generate and visualize 6DoF pose ground truth using the provided example/6d_pose_demo.py.
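Assuming the directory layout from the YCB demo above (the `bpycv` clone path is an assumption carried over from that step), the demo can be run headless the same way:

```shell
# from the directory containing the cloned bpycv repository
cd bpycv
blender -b -P example/6d_pose_demo.py
```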

Tips

Keep in mind that Blender may struggle to directly load .obj or .dae files from the YCB and ShapeNet datasets. For best results, use meshlabserver to convert and reformat your files. Run:

```shell
meshlabserver -i raw.obj -o for_blender.obj -m wtbrbr
```

Troubleshooting

If you’re facing issues during installation or while running the scripts, consider the following:

  • Verify that your Blender path is set correctly in your environment variables.
  • Make sure you have the right permissions to run scripts and access files. For Windows users, ensure PowerShell runs with administrator permissions.
  • If any command fails, check the console for specific error messages that can guide you to the solution.
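A quick diagnostic is to run a few lines in Blender’s Scripting editor to see which interpreter Blender is actually using and whether bpycv is visible to it (a sketch; `bpycv` will only report as available once the installation above succeeded):

```python
import importlib.util
import sys

def module_available(name):
    """Return True if `name` can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)             # the Python that Blender actually runs
print(module_available("bpycv"))  # False means bpycv went into a different Python
```

If `module_available("bpycv")` prints `False`, re-run the pip install against the interpreter path printed on the first line.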

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
