Category: Software, Coding, Programming

All the digital things

Coworking for Computer Vision

Hi, my name is Mark. I’ve been a member of ACE for almost 9 years. There have been three things on my To-Do list gnawing at my psyche for some time:

  1. Learn about the Raspberry Pi single-board computer through Internet of Things (IoT) applications.
  2. Get hands-on experience with Artificial Intelligence.
  3. Learn the popular Python programming language.

Why these? Because computers are getting smaller while getting more powerful; Artificial Intelligence (AI) is running on ever smaller computers; and Python is a versatile, beginner-friendly language that’s well-documented and used for both Raspberry Pi (RPi) and AI projects.

I’ve been working in computer vision, a field of AI, for several years in both business development and business operations capacities. While I don’t have a technical background, I strive to understand how my employers’ products and services are engineered in order to facilitate communication with clients. Throughout my career I’ve asked a lot of engineers a lot of naive questions because I’m curious about how the underlying technologies come together on a fundamental level. I owe a big thanks to those engineers for their patience with me! It was time for me to learn it by doing it on my own.

Computer Vision gives machines the ability to see the world as humans do, using methods for acquiring, processing, analyzing, and understanding digital images or spatial information.


In starting on my learning journey, I began a routine of studying at our ACE Makerspace coworking space every week to be around other makers. This helped me maintain focus after the pandemic induced a work-from-home lifestyle that left me in a serious brain fog.

My work environment at ACE Coworking

OpenCV (Open Source Computer Vision Library) is a cross-platform library of programming functions mainly aimed at real-time computer vision. Among its many components, it includes a machine learning library: a set of functions for statistical classification, regression, and clustering of data.

Fun Fact: Our ACE Makerspace Edgy Cam Photobooth seen at many ACE events uses an ‘Edge Detection’ technique also from the OpenCV Library.

A self-paced Intro to Python course came first. Then came a course on OpenCV which taught the fundamentals of image processing. Later still came tutorials on how to train a computer to recognize objects, and even faces, from a series of images.

Plotting the distribution of color intensities in the red, green, and blue color channels
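A plot like that takes only a few lines with OpenCV and matplotlib. Here’s a minimal sketch of the idea (the image path is just a placeholder, not anything from the actual course):

```python
# Minimal sketch: per-channel color histograms with OpenCV and matplotlib.
# The image path is a placeholder, not from the original tutorial.
import cv2
from matplotlib import pyplot as plt

image = cv2.imread("sample.jpg")          # OpenCV loads images in BGR order
colors = ("b", "g", "r")

for channel, color in enumerate(colors):
    # 256-bin histogram of pixel intensities for this channel
    hist = cv2.calcHist([image], [channel], None, [256], [0, 256])
    plt.plot(hist, color=color)

plt.title("Color intensity distribution per channel")
plt.xlabel("Intensity (0-255)")
plt.ylabel("Pixel count")
plt.show()
```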

 

3D scatter plot of distributions of grouped colors in images

 

A binary mask to obtain hand gesture shape, to be trained for gesture recognition
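A mask like that can come from a simple threshold. Here’s a rough sketch using Otsu’s method; the actual gesture tutorial likely involved more preprocessing, and the image path is a placeholder:

```python
# Rough sketch: segmenting a hand shape into a binary mask with Otsu thresholding.
# Assumes the hand contrasts with a plain background; the path is a placeholder.
import cv2

frame = cv2.imread("hand.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (7, 7), 0)   # smooth out noise before thresholding

# Otsu's method picks the threshold automatically;
# THRESH_BINARY_INV makes the (darker) hand appear white in the mask
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

cv2.imshow("binary mask", mask)
cv2.waitKey(0)
```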

 

Notice the difference in probabilities associated with the face recognition predictions when the face is partially occluded by a face mask

Eventually, I moved on to more complex projects, including programming an autonomous mini robot car that responds to commands based on what the AI algorithm infers from an attached camera’s video feed. This was real-time computer vision! There were many starter robot car kits to choose from: some are for educational purposes, others come pre-assembled with a chassis, motor controllers, sensors, and even software. Surely, this was the best path for me to get straight into the software and image processing. But the pandemic had bogged down supply chains, and it seemed that any product with a microchip was on backorder for months.

A backlog of cargo ships waiting outside west coast ports as a symbol of supply chain issues

I couldn’t find a starter robot car kit for sale online that shipped within 60 days, and I wasn’t willing to wait that long. And I didn’t want to skip this tutorial, because it was a great exercise combining the RPi, AI, and Python programming triad. ACE Makerspace facilities came to the rescue again with the electronics stations and 3D printers, which opened up my options.

I learned a few things working at computer vision hardware companies: sometimes compromises are made in hardware due to the availability of components, and sometimes compromises are made in software due to lack of time. One thing was for sure: I had to decide on an alternative hardware solution, because hardware supply was the limiting factor. On the other hand, software was rather easy to modify to work with various motor controllers.

So after some research I decided on making my own robot car kit using the JetBot reference design. The JetBot is an open-source robot based on the Nvidia Jetson Nano, another single-board computer that is more powerful than the RPi. Would this design work with the RPi? I ordered the components and shifted focus to 3D printing the car chassis and mounts while waiting for parts from Adafruit and Amazon to arrive. ACE has two Prusa 3D printers, so I could run print jobs in parallel.



When the parts arrived I switched over to assembling and soldering (and in my case, de-soldering and re-soldering) the electronic components at ACE’s electronics stations, which are equipped with hand tools, soldering supplies, and miscellaneous electrical components. Once everything was assembled, swapping in the Raspberry Pi for the Jetson Nano was simple, and the robot booted up and operated as described on the JetBot site.

Soldering
It’s ALIVE! with an IP address that I use to connect remotely

The autonomous robot car starts by roaming around at a constant speed in a single direction. The Raspberry Pi drives the motor controls, operates the attached camera, and marshals the camera frames to the attached blue coprocessor, an Intel Neural Compute Stick (NCS), plugged into and powered by the Raspberry Pi USB 3.0 port. It’s this NCS that is “looking” for a type of object in each camera frame. The NCS is a coprocessor dedicated to the application-specific task of object detection, using a pre-installed model called MobileNet SSD that is pre-trained to recognize a list of common objects. I chose the object type ‘bottle’.

“MobileNet” because these networks are designed for resource-constrained devices such as your smartphone. “SSD” stands for “Single-Shot Detector” because object localization and classification are done in a single forward pass of the neural network. In general, single-stage detectors tend to be less accurate than two-stage detectors, but they are significantly faster.
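For the curious, here is a hedged sketch of a single detection pass using OpenCV’s dnn module with the NCS as the inference target. It assumes an OpenCV build with the Intel Inference Engine (OpenVINO) backend and the standard Caffe MobileNet-SSD model files; the file names and confidence threshold are placeholders, not necessarily what my tutorial used:

```python
# Sketch: one MobileNet-SSD inference pass offloaded to the Neural Compute Stick.
# Assumes OpenCV was built with the Inference Engine (OpenVINO) backend and that
# the standard Caffe MobileNet-SSD files are on disk; paths are placeholders.
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)   # run inference on the NCS

def detect_bottle(frame, min_confidence=0.5):
    """Return (confidence, [x1, y1, x2, y2]) for the best 'bottle' detection, or None."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()          # shape: (1, 1, N, 7)

    best = None
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        class_id = int(detections[0, 0, i, 1])
        if confidence > min_confidence and CLASSES[class_id] == "bottle":
            box = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            if best is None or confidence > best[0]:
                best = (confidence, box)
    return best
```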

The Neural Compute Stick’s processor is designed to perform the AI inference – accurately detecting and correctly classifying a ‘bottle’ in the camera frame. The NCS localizes the bottle within the frame and determines the coordinates of the bounding box around it. It then sends these coordinates to the RPi, which computes the center of the bounding box and checks whether that center point is to the Left or Right of the center of the camera frame.

Knowing this, the RPi steers the robot accordingly by sending separate commands to the motor controller that drives the two wheels (a rough sketch of this logic follows the list):

  • If that Center Point is Left of Center, then the motor controller will slow down the left wheel and speed up the right wheel;
  • If that Center Point is Right of Center, then the motor controller will slow down the right wheel and speed up the left wheel.
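Stripped of the camera and detection details, that left/right decision boils down to a few lines. This is only a rough sketch: the `motors.set_speeds()` interface is hypothetical, standing in for whatever motor driver library a given kit uses, and the correction here is proportional to how far off-center the box is rather than a fixed step:

```python
# Rough sketch of the steering decision; the `motors` object and its
# set_speeds() method are hypothetical stand-ins for a real motor driver.
BASE_SPEED = 0.5      # nominal wheel speed (arbitrary units)
TURN_GAIN = 0.4       # how aggressively to correct toward the target

def steer_toward(bbox, frame_width, motors):
    """Adjust wheel speeds so the detected object drifts toward the frame center."""
    x1, y1, x2, y2 = bbox
    box_center = (x1 + x2) / 2.0
    frame_center = frame_width / 2.0

    # Error in [-1, 1]: negative means the object is left of center
    error = (box_center - frame_center) / frame_center

    # Object left of center -> error < 0 -> left wheel slows, right wheel speeds up
    left_speed = BASE_SPEED + TURN_GAIN * error
    right_speed = BASE_SPEED - TURN_GAIN * error
    motors.set_speeds(left_speed, right_speed)
```

In practice you would also handle the case where no bottle is detected at all, for example by continuing to roam or stopping.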

Keeping the bottle in the center of the frame, the RPi drives the car towards the bottle. In the lower-right corner of the video below is a picture-in-picture video from the camera on the Raspberry Pi. A ‘bottle’ is correctly detected and classified in the camera frames. The software [mostly] steers the car towards the bottle.

Older USB accelerators, such as the NCS (v1), can be slow and add latency to the computer’s reaction time, so there is a delay in executing motor control commands. (Not a big deal for a tabletop autonomous mini-car, but a BIG deal for autonomous cars being tested on real roads today.) On the other hand, this would be difficult to do on the RPi alone, without a coprocessor, because the Intel NCS is engineered to perform the application-specific number-crunching more efficiently and with less power than the CPU on the Raspberry Pi.

Finally, I couldn’t help but think there was some irony in the supply chain dilemma I experienced while waiting for electronics to help me learn about robots: maybe employing more robots in factories is how U.S. manufacturers will improve the resilience of their supply chains if they decide to “onshore” or “reshore” production back onto home turf. Just my opinion.

Since finishing this robot mini-car I’ve moved on to learn other AI frameworks and even training AI with data in the cloud. My next challenge might be to add a 3D depth sensor to the robot car and map the room in 3D while applying AI to the depth data. A little while back I picked up a used Neato XV-11 robot vacuum from an ACE member, and I might start exploring that device for its LIDAR sensor instead.

Let me know if you’re interested in learning about AI or microprocessors, or if you’re working on similar projects. Until then, I’ll see you around ACE!

Mark Piszczor
LinkedIn

Made at AMT-June 2019

NOMCOM Fob All The Things dashboard | AMT Software | Bodie/Crafty
Hand Built Speaker | Workshop | David
Recycling Game | Workshop/Laser | Bernard M.
Solid wood credenza | Workshop | Raj J.
Tiny electronic brass jewelry | Electronics | Ray A.
RFID Mint Dispensing Box | Laser + Electronics | Crafty
Wood Signage | CNC Router | James L.
Fabric Kraken stuffed with 720 LEDs | Textiles + Electronics | Crafty

Designing a replacement tool grip in Fusion 360

This is what the filament nippers in our 3D printing area looked like.

They work fine, but one of the rubber grips has almost split in two.

A few weeks ago, Evan made a valiant effort at saving them:

But, alas, the patch quickly broke off.

It’s a great excuse for another Fusion 360 3D printing article!

I’ll make the replacement in PLA. It won’t be squishy like the original, but it’ll be more comfortable than the bare metal.

To model it, I took a photo, then used Fusion’s ‘attached canvas’ feature. The easy way to use this feature is to simply import the image without entering any dimensions at all. Then right-click the attached-canvas object in the browser and select Calibrate. Fusion will prompt you to select two points; I chose the little hole near the joint and the end of the tang, which measures 98.6mm.

Now we can make a sketch of the profile. I fitted arcs to the shape as closely as I could; I find this easier than using splines when the shape allows for it. I used the ‘Fix’ tool instead of dimensions, since the scaled photo is what really defines the size here. I did not bother modeling the business end of the tool.

Next I extruded this profile to a 2mm thickness.

This was done in a component called tang. Next I created a new component called grip and sketched the outer profile. I projected the tang outline first, then offset the lower end and sketched the upper end to eyeball-match the existing grip.

This was extruded ‘downward’ to create the basic shape of the lower half of the grip.

Next, I sketched a profile and cut away a depression for the inner part. This profile was offset from the tang outline very slightly (0.2mm) to allow for a reasonable fit. In this case, I may have to adjust the dimensions for fit a few times anyway, so this step could probably be omitted.  Still, I think it’s good practice to explicitly design appropriate fit clearance for mating parts.

A chamfer on the bottom completes the grip. It’s not an exact match but it’s close enough.

Finally, I mirror the body to make the top half of the grip. I’ll print in two pieces and glue them together to avoid using support material.

When I don’t know for sure that I have the size of something right, I often print an ‘abbreviated’ version to test the fit. This part’s small enough that I probably don’t need to, but just to illustrate the step, here’s what I do. Use the box tool, with the intersect operation. Drag the box until it surrounds the area of interest. Precise dimensions are not necessary here; we’re just isolating the feature to be tested.

In this case, I’ve simply shaved off the bottom few millimeters. I can cancel the print after just a few layers and see how well it fits the handle.

Once I’m done testing, I can simply disable (or delete) the box feature in the history timeline.

Let’s print it and see what we’ve got!

Hm… not quite. The inner curve seems right, but the outer is too tight. I’ll tweak the first sketch and try again.

This one’s still not perfect, but I think it’s close enough. Here are the complete parts, fresh off the printer.

The fit is okay but there are a few minor issues: The parts warped very slightly when printed, and the cavity for the tang was just a hair too shallow.

A bit of glue and clamping would probably have solved the problem but I had to knock off for the day anyway, and took a bit of time the next day to reprint at my own shop. I even had some blue filament that’s a closer match to the original grip.

Here it is, glued and clamped up. I gave the mating faces a light sanding to help the glue stick better. I used thick, gel-style cyanoacrylate glue, which gives a few seconds to line things up before it grabs. It seems to work very well with PLA.

And here’s the result. Let’s hope it lasts longer than the original!

But wait… Has this all been worth it?

Well, probably not. I found brand-new nippers from a US vendor for $3.09 on eBay. They’re even cheaper if you order directly from the Far East.

Oh well.  I think the techniques are worthwhile to know. The main thing is that it made for a good blog post!

 

AMT’s Adventures at Maker Faire 2018

The Art Printing Photobooth aka The Edgy Printacular

At the Bay Area Maker Faire 2018, a team of Ace Monster Toys members created a photobooth where participants could take selfies which were then transformed into line art versions and printed, all initiated by pressing one ‘too-big-to-believe’ red button.

Back in March, AMT folks began prepping for Maker Faire 2018 and had an idea: what if you made a machine that could take a selfie and then generate a line art version of said selfie, which could then be printed out for participants like you and me?! Thus, the Art Printing Photobooth was born! This project was based on the Edgy Cam project by Ray Alderman. AMT created a special Slack channel, #maker-faire-2018, just for Bay Area Maker Faire 2018. Then members set about figuring out how exactly to make this art-generating automaton, and Rachel (Crafty) campaigned for having a ‘too-big-to-believe’ push button. They would need many maker skills: CNC routing and file design, woodworking, electronics wiring, and someone to art it all up on the physical piece itself. Bob (Damp Rabbit) quickly volunteered to take on the design and CNC cutting, while Ray (whamodyne) started to chip away at the code that would be used to convert photos to line art.


Then the trouble began. By mid-April, our intrepid troubleshooters were running into all sorts of snags – so much so that the original code needed to be thrown out and rewritten from the ground up! To add additional difficulty (and awesomeness!), the team decided to use a Print on Demand (POD) service so participants could have their generated art uploaded and made available for printing on mugs, t-shirts, posters, etc. Soon after, Ray wrote new Digispark code for the big red button to trigger the script that converts and prints the line art, built with Python 3, the OpenCV library, and the printer library from https://github.com/python-escpos/python-escpos (a rough sketch of the approach is below).
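The sketch below illustrates the general grab-a-frame, edge-detect, and print flow rather than the actual Faire script; the camera index, Canny thresholds, and the receipt printer’s USB vendor/product IDs are placeholders:

```python
# Illustrative sketch of the photobooth flow: grab a frame, turn it into line art
# with Canny edge detection, and send it to a receipt printer via python-escpos.
# The camera index, thresholds, and USB IDs below are placeholders.
import cv2
from escpos.printer import Usb
from PIL import Image

def snap_and_print():
    cam = cv2.VideoCapture(0)                  # first attached camera
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("camera capture failed")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)          # white edges on black
    line_art = cv2.bitwise_not(edges)          # invert: black lines on white paper

    printer = Usb(0x0416, 0x5011)              # placeholder vendor/product IDs
    printer.image(Image.fromarray(line_art))   # python-escpos accepts a PIL image
    printer.cut()

if __name__ == "__main__":
    snap_and_print()
```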


Meanwhile, Crafty Rachel and Bernard were configuring the TV mount that would be the selfie-display of the photobooth and Damp Rabbit was busy CNCing and painting up a storm to create the beautiful finished product – The Edgy Printacular! The EP was a hit and won three blue ribbons at Maker Faire 2018. Another happy ending that speaks to what a few creative makers can do when they put their heads together in a place with all the right equipment, Ace Monster Toys <3


AMT Expansion 2018

This month AMT turns 8 years old and we are growing! We have rented an additional 1,200 sq ft suite in the building. We have a Work Party Weekend planned June 1-3 to upgrade and reconfigure all of AMT. All the key areas at AMT are getting an upgrade:

CoWorking and Classroom are moving into the new suite. Rad wifi, chill space away from the big machines, and core office amenities are planned for CoWorking. The new Classroom will be reconfigurable and have double the capacity.

Textiles is moving upstairs into the light. The room will now be a clean fabrication hub with Electronics and 3D Printing both expanding into the space made available. Photo printing may or may not stay upstairs — plans are still forming up.

Metal working, bike parking, and new storage (including the old lockers) will be moving into the old classroom. But before they move in, the room is getting a facelift: we’re returning to the cement floors and giving the walls a new coat of paint.

The CNC room and workshop will then be reconfigured to take advantage of the space Metal vacated. We aren’t sure what that is going to look like beyond more workspace and possibly affordable storage for larger short-term projects.

Town Hall Meeting May 17th • 7:30PM • Plan the New Space

What expansion means to membership

The other thing that happened in May is that, after 8 years, our rent finally went up. It is still affordable enough that we get to expand. Expansion also means increasing membership to cover the new rent and to take advantage of all the upgrades. We are looking to add another 30 members by winter. Our total capacity before we hit the cap will be 200 members. We feel that offering more classes and the best bargain in co-working will allow us to do this. Please help get the word out!

The New Suite in the Raw

Big empty room

Fusion 360 Hangout Notes

We had a great session last night (2-12-18) at the Fusion 360 hangout.

  • I burned most of the time presenting the design discussed in my recent blog post on best practices. I fielded lots of questions and expanded on some of the points in that post, so everyone seemed to get something out of it.
  • Chris has been struggling with sketches that began life as imported DXF files. Lots of funny duplicated lines in the sketches we looked at. We kicked around a few ideas for him to try, but nobody had the magic answer.
  • Steve has been playing with Fusion’s Drawing feature & had some neat things to show.
  • Bob showed us some of his progress carving guitar parts. This is complex CAM stuff involving multiple operations and remounting parts to carve two sides. Can’t wait to see the progress.

A ‘pair-programming-style’ hangout was proposed for a future session. I think it’s a GREAT idea… We work together in pairs, sharing experience and generally bouncing ideas off each other while working through real member projects.

This kind of meeting can be run by anyone… and I’m looking for volunteers. I think a group meeting would be a lot of fun… that way we could negotiate which projects we might be able to help most with, or are most interested in. …but it doesn’t _have_ to be a group meeting. If nothing else, feel free to pipe-up in this forum anytime you get stuck and think an extra set of eyes would help. And _do_ make yourself available to others: I’ve learned a great deal about Fusion through other folks’ projects, since they so often approach the tasks in a way that would never occur to me.

By popular request, I’m going to put together a more traditional class for next time, focusing on beginners. The hands-on format was overwhelmingly preferred to anything else we’ve tried, so we’ll go with that. No schedule yet; watch this space!

The Vorpal Combat Hexapod

I demonstrated this fun robot at the last BoxBots build night and our general meeting last Thursday. Since then a few folks have asked questions so I thought I would post more detail.

The Vorpal Combat Hexapod is the subject of a Kickstarter campaign I discovered a few weeks ago. I was impressed and decided to back the project. I had a few questions so I contacted the designer, Steve Pendergrast. Then I had a few suggestions and before long we had a rich correspondence. I spent quite a bit more time than I’d expected to, offering thoughts for his wiki, design suggestions, etc.

Steve appreciated my feedback and offered to send me a completed robot if I would promise to demonstrate it for our membership. The robot you see in the photos was made by Steve, not me. Mine will be forthcoming!

You can read the official description on the Kickstarter page and project wiki. Here are my own thoughts and a few of the reasons I like the project so much.

It’s cool!

It has to be, in order to get the kids interested; that’s something Ray has always understood with BoxBots. While BoxBots offers the thrill of destructive combat, the hexapod offers spidery, insect-ish, crawly coolness with interactive games and programming challenges.

It’s a fun toy

Straight away, this robot offers a lot of play value. There are four walk modes, four dance modes, four fight modes, and a built-in record/playback function. To get kids interested in the advanced possibilities, you have to get them hooked first. Don’t be intimidated by that array of buttons. At the BoxBots build night, the kids all picked it up very quickly. I couldn’t get the controller out of their hands.

It’s open-source

The circuitry, firmware, and plastic parts are already published. A lot of crowd-funded projects promise release only after funding, and some only publish the STL files, which can be very difficult to edit. Steve has provided the full CAD source (designed in OnShape).

Easy to Accessorize

The Joust and Capture-the-flag games use special accessories that fasten to a standard mount on the robot’s nose. This simplifies add-on design since there’s no need to modify the robot frame. There are also magnets around the perimeter, encouraging fun cosmetic add-ons like eyes and nametags.

Off-the-shelf electronic components

There are no custom circuit boards here. It’s built with two Arduino Nano boards, two Bluetooth boards, a servo controller, buzzer, pot, micro-SD adapter, two pushbutton boards, inexpensive servos, etc. This stuff is all available online if you want to source your own parts. If you’re an Arduino geek, it will all look familiar.

No Soldering!

I think every kid should learn how to use a soldering iron in school, but for some it remains an intimidating barrier. In the hexapod, everything’s connected with push-on jumper wires. (If you source your own parts you will probably have to solder the battery case and switches, since these seldom have matching connectors.)

Scratch programming interface

The controller and robot firmware are written in Arduino’s C-like language, but the robot also supports a beginner-friendly drag-and-drop programming interface built with MIT’s Scratch system. I confess I haven’t investigated this feature yet, but I’ve been curious about drag-and-drop programming paradigms for years. My first programs were stored on punched cards. Finally, I have an opportunity to see how today’s cool kids learn programming!

It’s 3D printed

The parts print without support and work fine at low resolution. You’ll want to get your own spool of filament so you have the color available for replacement parts. Any of our printers will work. I’ve had good luck so far with PLA, but Steve recommends more flexible materials like PETG or ABS.

Anyway, enough gushing. I do not have any financial interest in the project. I just like to encourage a good idea when I see one. The Kickstarter campaign just reached its goal a few days ago, so it’s definitely going to be funded. If you’d like to back the Kickstarter or learn more, here’s the link. You’ll have to act fast; there are only a few days left. (Full disclosure: I do get referral perks if you use this link.) Remember that you always assume some risk with crowd-funding. I’ll make no guarantees, but I’m satisfied that Steve is serious about the project and is no scammer.

Click here for the Hexapod Kickstarter campaign.

If you’d like to see this robot in person, contact me on Slack. I’ll try to arrange a demo.

-Matt

A note on Fusion 360 for the big CNC

The gcode emitted by Fusion 360 using the default settings does not work on our big CNC. Rama figured out that manually editing the gcode and removing the first six lines gets around the issue.

I was curious about this and decided to investigate. I reverse-engineered the codes in the preamble, but all seemed to be perfectly valid Mach 3 g-code. Finally, I found the culprit: G28.

g28screenshot

It turns out that there’s a simple solution: click Post Process to create the gcode, then open the Properties pane and un-check useG28. This option also controls some related codes at the end of the file.

g28codeshot

I do not recommend deleting the entire six-line preamble! It sets up various values in Mach 3’s brain, and omitting them may give unexpected results. It sets units to Metric or Imperial, for example; if omitted, your job might be unexpectedly scaled to a weird size.

That’s all you really need to know! Read on if you’re interested in the details.

The issue is covered in this article:

http://discuss.inventables.com/t/learning-about-g28/12205

Briefly, G28 is used to return the cutter-head to the home position. If your CNC machine has end-stop switches, Mach 3 can be configured to move to the physical limits of its travel, which is often a convenient parking place for the cutter-head at the end of the job. It also resets Mach 3’s zero position in case you have some kind of permanent workpiece mounting arrangement that always positions the workpiece in the same place.

We don’t use the big CNC this way. Instead, we mount workpieces in a variety of ways and manually set the zero position before each job. The article above makes a case for implementing G28, but I don’t think it’s applicable for us.

I figured this out by digging into the code. It turns out that the tool-path is converted to gcode by a nicely commented JavaScript program. Search your system for ‘mach3mill.cps’. It will be buried down in the bowels of your application tree somewhere, and is probably in a different place for PCs vs. Macs. I looked for the G28 code, found it was controlled by an option, then finally googled for that option to locate the above post. Anyway, it’s good to know that we have flexibility if we need to further customize gcode generation.