3D Scanner


The 3D scanner was inspired by my recent introduction to 3D printing during my junior year. When I ran across this tutorial on printing a 3D scanner frame, I decided to give it a shot.

After assembling the scanner and collecting the images, I have been writing the code to convert the images into point clouds, which will, in turn, be used to create the 3D mesh!

Printing and Parts

First Steps

After printing out the files provided by the aforementioned tutorial, I had to wait for the lasers and the stepper motor to arrive (links to the hardware are in the tutorial).


The stepper motor arrived first. It fit nicely into the printed parts and wasn't too difficult to get the Raspberry Pi to interface with its driver. This was the first time I had used stepper motors, and I was impressed with how accurate and controllable the rotation was.

After the lasers had arrived, I attached them and set up the circuitry. I was almost ready to scan, except...

Camera Mount

The 3D files for the scanner were designed to hold a web camera and didn't fit the Raspberry Pi camera I wanted to use. Keeping the brackets from the original file, I removed the original mount and designed one that would hold the new camera. On the third iteration, it fit!

New Camera Mount


First Scan

After installing the camera, I wrote a quick script to run the lasers and the motor, then started the first scan! This was the result.

First Scan Code

import os
import time
import RPi.GPIO as GPIO
import picamera

laserPins = [5, 6, 13, 19]
stepPins = [17, 18, 21, 22]

GPIO.setmode(GPIO.BCM)
for pin in laserPins + stepPins: #initialize the output pins
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, False)

def setStep(w1, w2, w3, w4): #change the output pins, used in other fcns
    GPIO.output(stepPins[0], w1)
    GPIO.output(stepPins[1], w2)
    GPIO.output(stepPins[2], w3)
    GPIO.output(stepPins[3], w4)

def forward(delay, steps): #move the stepper forward
    for i in range(steps):
        setStep(1, 0, 1, 0)
        time.sleep(delay)
        setStep(0, 1, 1, 0)
        time.sleep(delay)
        setStep(0, 1, 0, 1)
        time.sleep(delay)
        setStep(1, 0, 0, 1)
        time.sleep(delay)

delay = 5 #milliseconds between steps
steps = 4 #full revolution is 512 steps, so 128 stops of 4 steps each

camera = picamera.PiCamera() #start the camera
camera.start_preview() #start to enable fast picture taking
os.chdir('scanPics') #setup output
t1 = time.time()
for i in range(128): #for every stop of the rotation
    for a in range(4):
        for pin in laserPins: #turn all the lasers off
            GPIO.output(pin, False)
        GPIO.output(laserPins[a], True) #turn one laser on
        camera.capture('{0}-{1}.jpg'.format(i, a)) #take a picture
    forward(delay / 1000.0, steps) #move the table
    print(i)
camera.stop_preview() #disable preview
t2 = time.time()
print(t2 - t1) #print runtime
for pin in laserPins + stepPins: #turn off the lasers and stepper
    GPIO.output(pin, False)
GPIO.cleanup()


Processing First Steps

The first step in processing the images was to find the laser's location in every row. This makes it easier to find the coordinates of intersection between the object and the laser 'plane' in the future.

To do this, I first had to learn how to handle images in Python. When loaded into Python, an image is a 3-dimensional array: the first dimension holds the rows, each row holds its pixels, and each pixel holds the RGB values for its color. An example of a 2x2px red image is below.

[[[255, 0, 0], [255, 0, 0]],
 [[255, 0, 0], [255, 0, 0]]]
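To make this structure concrete, here is a small sketch using NumPy (the array library the processing code uses) that builds the 2x2 red image above and slices out its red channel:

```python
import numpy as np

# The 2x2 all-red image from above as a 3-dimensional array:
# rows -> pixels -> [R, G, B] values
im = np.array([[[255, 0, 0], [255, 0, 0]],
               [[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)

print(im.shape)         # (2, 2, 3): 2 rows, 2 pixels per row, 3 color values
print(im[0][1])         # the second pixel of the first row

redLayer = im[:, :, 0]  # slice out just the red value of every pixel
print(redLayer)         # a 2x2 array of 255s
```

Slicing out the red channel like this is what lets the laser-finding code below scan each row for its brightest red pixel.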

To find the laser in each image, I iterated over each row and found the pixel with the maximum red value, provided it was above a threshold. If such a pixel exists, I append its location to an array that is later turned into an image. The video below shows the result of a directory of processed images. (Early in the video, the camera was knocked out of place for ~50 pictures.)

You may be able to tell that the scanned object is a cup because the laser reflects off the back of it as it scans higher up.

Code for converting image array to B/W

import numpy as np

redLayer = im[:, :, 0] #red channel of the loaded 480x720 image
hotspot = [] #laser's x position in each row, or -1 if not found
for i in range(len(im)): #runs once for every row
    if np.max(redLayer[i]) > 20: #finds the max of each row, counts it if above the threshold
        hotspot.append(np.argmax(redLayer[i]))
    else:
        hotspot.append(-1) #filler
loc = [i for i, x in enumerate(hotspot) if x >= 0] #rows with a detection

post = np.zeros((480, 720, 3), np.uint8)
for i in loc:
    post[i, hotspot[i]] = [255, 255, 255] #mark the laser pixel in white

yValues = [] #going to be points from the fitted line
slopes = []
if len(loc) > 40:
    for l in range(1, 40): #slopes between the last 40 detections
        #loc = y value, row number where there was a detected maximum
        #hotspot = x value, location of the pixel (how far across)
        dx = float(hotspot[loc[-l]] - hotspot[loc[-(l + 1)]])
        if dx != 0:
            slopes.append((loc[-l] - loc[-(l + 1)]) / dx)

    slope = np.average(slopes)
    b = loc[-1] - (slope * hotspot[loc[-1]]) #find b for y=mx+b
    for z in range(len(im[1])): #all x values
        yValues.append(int(slope * z + b))
    for t in range(len(yValues)):
        if 0 <= yValues[t] <= 479:
            post[yValues[t], t] = [0, 0, 255] #draw the fitted line in blue


Next Steps

I am currently working to convert the points collected from the processed images into 3D coordinates. I have collected many images of different items and developed programs that plot the shape of the object based on the laser's location at a given height.
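As a sketch of the conversion I'm working toward, the geometry can look like the following. Everything here is a simplifying assumption, not the finished method: the laser sheet is taken to be vertical and passing through the turntable's rotation axis, and the constants (pixel scale, laser angle, image center, frames per revolution) are hypothetical placeholders that would really come from calibrating the rig.

```python
import math

# Hypothetical calibration constants (would come from measuring the rig):
THETA = math.radians(30)  # assumed angle between camera axis and laser sheet
S = 0.05                  # assumed cm per pixel at the turntable axis
CENTER_COL = 360          # assumed pixel column of the rotation axis
CENTER_ROW = 240          # assumed pixel row of the turntable surface
FRAMES = 128              # pictures per full revolution (512 steps / 4 per stop)

def pixel_to_point(row, col, frame):
    """Convert one detected laser pixel into an (x, y, z) point.

    Assumes the laser sheet is vertical and contains the rotation
    axis, so a pixel's horizontal offset from the axis column
    measures the object's radius at that height.
    """
    r = S * (col - CENTER_COL) / math.sin(THETA)  # radius from the axis
    z = S * (CENTER_ROW - row)                    # height above the table
    phi = frame * (2 * math.pi / FRAMES)          # turntable angle
    return (r * math.cos(phi), r * math.sin(phi), z)
```

With these placeholder constants, `pixel_to_point(200, 420, 0)` gives a point of roughly (6.0, 0.0, 2.0); running every detected pixel from every frame through a calibrated version of this would produce the point cloud.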

I am writing my program based on the ideas in this research paper.