Playing with Google Apps Scripts

[Image: OCRcover]

Google Apps Script (GAS) is an interesting tool you can use to automate tasks with functions built on Google’s powerful APIs. Personally, I couldn’t find a ton of info on the language’s syntax and what-not, but it seems to be JavaScript with some caveats: it doesn’t like line breaks in the middle of statements, you use single quotes for strings, and it ignores newline characters (“\n”) in strings. Oddly, though, you can’t use the JavaScript examples from Google’s developer API documentation directly (or at least I didn’t have any luck).

As an example project, something I’m working on required me to read the text out of an image then highlight keywords. I broke this down into three steps.

  1. Acquire an image
  2. Use Optical Character Recognition (OCR) to read text
  3. Search and highlight keywords

Background:

I’ll start by giving you access to the finished project code. To run it, you will have to “File –> Make a copy” the project. Then follow the steps on this webpage word-for-word to activate the Drive API in the developers console and you’ll be fine. Sometimes the developers console site doesn’t want to load; just keep trying until it works. When you want to run the code, click “Publish –> Deploy as web app”. The first time, you’ll have to set who is allowed to access the app. Be careful: if you allow others to execute the app as your account, they will also have access to your Google Drive. Once you’ve set this, click “Deploy” and you’ll get the link to the web app. The first time you run it, you will have to grant it access to your Google Drive.

The Guts and Explanation:

[Image: OcrTestForm]

Luckily, while searching, I stumbled across the blog of Amit Agarwal, which is full of great example code. For step one, the simplest solution I could find was his example that uses a custom HTML form to upload an image to your Google Drive account. This might seem like overkill: why didn’t I just use the fileID of something already in my Google Drive? Well, in this particular application, the user might be uploading multiple files from a camera on a mobile device. This form setup is a simple web-based uploader that gives me complete access to the guts of what’s going on, which makes it easy to get the fileIDs of different files.
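In rough form, the upload flow looks like the sketch below. This is my own minimal reconstruction of the pattern, not Amit’s actual code; the function names, the “OcrTest” folder name, and the “myFile” form field are placeholders I made up:

```javascript
// Serves the HTML upload form when someone visits the web app URL.
// Assumes an HTML file named "Form" exists in the script project.
function doGet() {
  return HtmlService.createHtmlOutputFromFile('Form');
}

// Called from the form via google.script.run. Saves the uploaded blob
// into a subfolder ("OcrTest" here is a placeholder name) and returns
// the new file's ID so later steps can find it.
function uploadFile(formObject) {
  var blob = formObject.myFile; // file input named "myFile" in the form
  var folders = DriveApp.getFoldersByName('OcrTest');
  var folder = folders.hasNext() ? folders.next() : DriveApp.createFolder('OcrTest');
  var file = folder.createFile(blob);
  return file.getId();
}

// Small pure helper: only accept common image extensions,
// since the OCR step only makes sense for pictures.
function isImageFile(name) {
  return /\.(jpe?g|png|gif|bmp)$/i.test(name);
}
```

The server-side functions only run inside Apps Script, but the `isImageFile` check is plain JavaScript and works anywhere.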

For the second step, the OCR, I had tried multiple methods of getting an Android-native OCR library to work. Every tutorial I found was highly dependent on which version of the Android SDK or Eclipse I had, or I would just hit a dead end when I tried to compile. Plus, those would only work on Android, not every device, and I really wanted a web app. That’s actually how I stumbled upon Google Apps Script again. I had played with it a bit in the past, but this time, after finding some great example OCR code on Amit’s website, I found GAS much more accessible. I added that code to the HTML form example from step one above and tweaked it a bit.

In the original OCR code snippet, the script reads an image, then creates a Google Doc containing the image followed by the text it recognized, saved into the root folder of Google Drive. Since the uploader code (from step one) saved its picture into a subfolder, I made the OCR code save into that same folder. The way to do this is to add a “parents” ID tag to the properties of the OCR file, and since I already had the folderID from the uploader code, that was pretty easy to sort out once I knew what to look for. As I said, I couldn’t find a lot of info on the GAS language, and this took me a while to track down: by looking at how Google’s other services save a file into a subfolder, I was able to do the same thing in GAS. You can see the results below.
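Here is a hedged sketch of what that OCR-plus-“parents” step can look like, assuming the advanced Drive service (v2) has been enabled as described above. The function names, document title, and folder handling are my own placeholders, not the original snippet:

```javascript
// Build the metadata for the OCR'd Doc, including the "parents" tag
// that puts it in the same subfolder as the uploaded image.
function makeOcrResource(title, folderId) {
  return {
    title: title,
    parents: [{ id: folderId }]
  };
}

// Run OCR on an uploaded image via the advanced Drive service (v2)
// and return the URL of the resulting Google Doc.
function ocrImage(imageFileId, folderId) {
  var blob = DriveApp.getFileById(imageFileId).getBlob();
  var resource = makeOcrResource('OCR result', folderId);
  var docFile = Drive.Files.insert(resource, blob, {
    ocr: true,          // ask Drive to recognize text in the image
    ocrLanguage: 'en'
  });
  return docFile.alternateLink; // the URL the form prints out
}
```

The `makeOcrResource` helper is pure JavaScript; `ocrImage` only runs inside Apps Script with the Drive API enabled.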

[Image: OceTestDriveFolder]

I tweaked the form code to print out the URL of the newly created Doc file with the searchable text. I couldn’t figure out how to get logging to work in GAS at all; every time I ran something that should print to the console, a console would briefly appear, then disappear before I could read anything. So I just stuck with having the form print the URLs, which I could copy and paste into the address bar to visit the document.

The third step was searching for keywords within the text. Again, someone else had done the hard work for me; I simply tweaked the code and pasted it as a function into my script.
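As a rough illustration, the search-and-highlight step can be sketched like this. The helper names and the yellow highlight color are my own choices, not the code I actually borrowed:

```javascript
// Pure helper: count how many times any of the keyword fragments
// occur in a block of text (case-insensitive). Useful for sanity
// checking the OCR output before bothering to highlight anything.
function countKeywordMatches(text, keywords) {
  var count = 0;
  keywords.forEach(function (kw) {
    var matches = text.match(new RegExp(kw, 'gi'));
    count += matches ? matches.length : 0;
  });
  return count;
}

// Highlight every occurrence of each keyword in the OCR'd Doc.
function highlightKeywords(docId, keywords) {
  var body = DocumentApp.openById(docId).getBody();
  keywords.forEach(function (kw) {
    var found = body.findText(kw);
    while (found !== null) {
      var textEl = found.getElement().asText();
      // Paint just the matched range yellow.
      textEl.setBackgroundColor(
        found.getStartOffset(),
        found.getEndOffsetInclusive(),
        '#ffff00'
      );
      found = body.findText(kw, found); // continue from the last match
    }
  });
}
```

For the poem test below, calling something like `highlightKeywords(docId, ['oor', 'ore'])` would mark all the rhyming sounds; `countKeywordMatches` is plain JavaScript and runs anywhere.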

To test the code, I printed a page of public domain POE-etry and took a picture of the page with my cellphone, to simulate how a user might use it. To make sure I’d get a lot of hits on my text search, I hardcoded the search keywords to be the rhyming sounds in the poem, in this case –oor and –ore. Then I uploaded the image to my Google Drive using the HTML form. After the couple of seconds the upload takes, all I needed to do was open the newly created OCR Doc file and see how well the OCR worked.

As I said before, Google Drive’s OCR process pastes the image into the file, then translates the image to text. Even with a photo taken in normal room lighting at night, it was really good at translating the text, and it easily searched and highlighted the keywords I hardcoded. Here’s the resulting file.

Guiding Telescope with a Webcam Setup

I’ve finally gotten jealous enough of the astrophotography subreddit to get back to work on this project. Jess bought me a Meade LX10 8″ diameter telescope several years ago for my birthday. I’ve used it quite a bit to view planets and to try to take deep-sky astrophotography pictures. This telescope isn’t one of those fancy “GO TO” scopes where you can type in whatever cool thing you want to see and it’ll drive itself to point right at it. Rather, it has a simple “barn door” tracker motor. Basically, if you align it to perfect true north and set the wedge (the thing that mounts the telescope to the tripod) to your latitude, whatever you point the scope at will stay in view in the eyepiece for hours. If I know where to look, I can attach a camera to the scope, leave the shutter open, and get some amazing pictures of nebulae and galaxies.

Since I’m no good at polar alignment, I decided a few years ago to build an Arduino interface to connect my scope to my computer. The way this works is that I attach a webcam to the spotter scope (the small telescope that helps you find stuff), which watches a particular star. The webcam pipes data into a program that sends signals out to the Arduino to move the scope, keeping the star in the same part of the webcam’s view. This way, I don’t have to be perfectly polar aligned; the software will adjust the position of the scope for me.

I went on the hunt for a webcam that would work well with both Windows and Linux. A lot of people buy Raspberry Pi boards, connect a webcam, and attach the whole setup to the telescope, but right now I’m testing on a Windows machine, so I need a webcam that’ll play well with both. I looked at the Linux USB Video Class (UVC) driver list to find a good modern camera; it shows a good number of webcam models and brands known to work natively in recent Linux distros.

The camera I landed on is the Logitech HD Webcam C270. It is a very cheap 720p, 3-megapixel webcam. That’s overkill for the telescope, but it’s a good general-use webcam we can also use for video chats and such, which means my solution for attaching the camera to the scope can’t be permanent.

I keep a bunch of 3/4″ PVC pipes and connectors in the garage for prototyping, so I grabbed a 3/4″ T connector. This connector can easily accommodate my 1″ outer-diameter sighting scope.

[Image: webcamTele]

The scope doesn’t fit perfectly, so I added some 2mm sticky-backed craft foam for a snug press fit. (On a side note, I can’t tell you how useful it is to have this kind of foam in the toolbox for all sorts of random purposes. I use it all the time.) To accommodate the webcam, I used a hacksaw to cut a portion of the PVC connector off as shown. Then I wrapped a 3/8″ piece of foam over each of the cut edges of the PVC where it will touch the camera. This will help the camera seat well and stay in place when I attach it to the scope.

[Images: 3webcamTele, 4webcamTele]

Finally, I used a smooth “ouchless” hair tie to hold the camera to the PVC tightly and aligned the camera with the hole in the PVC T-joint. Again, believe it or not, these hair ties are pretty useful for random jobs. In fact, I use an 8-inch smooth headband made of the same material to hold on my cheapo dew shield (more on that in another post).

[Images: 5webcamTele, 6webcamTele]

[Image: 7webcamTele]

 

The final product is easy to use and quite robust. I think it’ll work quite well with the rest of my setup. Since I’m still working that all out, I’ll post more as I learn more.

[Image: 8webcamTele]

Book Review: Ready Player One

Ready Player One came out in 2011. I had heard great things about it and finally decided to check it out. I’m pretty stingy with my book choices, but it’s a best seller with 4.5 stars on Amazon from more than 8,500 reviews, and a 4.31 rating on Goodreads from over 233,000 ratings and 33,000 reviews.

Ready Player One starts out very promising, with a good post-apocalyptic, cyberpunk-ish feel set in the year 2044. But it quickly turns into an episode of VH1’s “I Love the ’80s”. You can’t go three sentences without the author name-dropping some ’80s cartoon/movie/actor/band/song.

Synopsis (no spoilers):

The premise of the story is that a game designer from the 1980s spends the following decades creating the best gaming system in the world: an entire virtual universe. People connect to this virtual universe (called the OASIS) using virtual reality goggles and haptic feedback devices such as gloves or a body suit. Its different worlds seem to be massively multiplayer games with all the best parts of popular PC games such as The Sims, Spore, World of Warcraft, etc.

The creator of this system filled it with 1980s memorabilia; entire planets are designed to look and feel exactly like the player is re-living the 1980s. The creator has died (being 70-something years old in 2044…), but he hid a special Easter egg in the OASIS. All the players are trying to find it, because it will give its finder control of the entire system, which is worth billions of dollars. And as this is a dystopian novel, there’s an evil supercorporation also vying for the Easter egg.

That’s all I can say without giving too much away, but what I take issue with is the focus on the 1980s. With more focus on the story or the characters, this could be a great book, but instead it’s about 50 pages of story and about 320 pages of ’80s references.

Do you remember the ’80s? It sucked. The music, the clothing styles, the color schemes used on everything… gah. I’m reminded of it every time I see some hipster doofus in skinny jeans or listen to the radio nowadays. I think of the ’80s as almost the dark ages of style… except you can’t call it the dark ages… maybe the NEON ages. This book simply panders to hipsters who like saying, “Hey man, remember Cyndi Lauper’s ‘Time After Time’? Remember Wham! and Devo? Remember the TRS-80 computer? Remember Galaga arcade games? Remember The Goonies? Weren’t those things just the best!?” (This is not hyperbole. These are just a few of the hundreds of needless ’80s references, practically two per page in this 350+ page book!)

Now, I’m all for ’80s video games. I’m not a gamer, but I appreciate the art and ingenuity of games. In fact, I prefer some ’80s-era video games over games nowadays because of the programming tricks involved in getting certain features out of the very limited hardware. The programmers had to be very clever just to get some systems to draw full-screen color graphics. Nowadays, no one thinks much about that kind of thing, because every system has gigabytes of RAM, GPUs that can handle all sorts of crazy 3D rendering, and multicore processors. While there is certainly amazing work being done nowadays, I feel that the ’80s were a special time in video game history that should be appreciated. Unfortunately, the way this was done in Ready Player One left a bad taste in my mouth.

I’ve heard they are making a movie out of this book. I admit that while reading it, I could easily see it as a movie, but that’s mainly because this movie has already been made. There’s nothing novel here whatsoever. Take some random dystopian/cyberpunk ’80s movies, add a dash of video game storylines from the last 30 years and a pinch of Hackers (from the mid-’90s), mix them all together, and you get this book. I was much more interested in the book/movie The Martian by Andy Weir, and hopefully they will make a movie of Wool (the Silo series) by Hugh Howey, which came out around the same time but is much better.

Altogether, I give it a 2 out of 5 rating, based on the story itself and for wasting my time with all the reminders of a horrible decade for style (since, for some reason, I did read the whole book).

</rant>

Appendix:

At random, I flipped to a page in the book (page 106). Here are the references on just that one page. Mind you, this book is set in the year 2044:

  • Dungeons of Daggorath
  • Vector-graphics
  • cassette decks (in this case, being used to load a computer game)
  • Conan the Barbarian
  • Ladyhawke
  • Wizards (I’m counting this because of the context, and because all sorts of ’80s games/movies had wizards in them)
  • Dot matrix printer
  • WarGames (the movie)

I just bought a GlowForge laser cutter and here’s why

UPDATE 2017: I canceled my order for the Glowforge. Life happened, and I couldn’t afford to let them earn interest on my money any longer. I do still recommend you get one if you can. I’ve seen them in use in person at the Charlotte Latin FabLab and it is really awesome!

Original article below:

For the last 7 years or so, Jess and I have considered purchasing a laser cutter. My personal goal is to have my own FabLab, and I’m partially there with Jess’s KNK Zing vinyl cutter and my Shapeoko/X-Carve CNC machine. The two main missing components are a 3D printer and a laser cutter. Being a FabAcademy alum and running a FabLab at work, I am intimately aware that lasers are the most used (and arguably most useful) machines. They are definitely the most fun to play with. They are also the easiest to make money with (it’s always easiest for me to justify big purchases with the expression “hobbies that pay”). For the past several decades, laser cutters and engravers have been used in trophy shops and all sorts of companies. You can use a laser cutter to make products to sell on Etsy (as many people do), make the most amazing personalized birthday and holiday gifts, prototype your ideas, or just make cool stuff for yourself.

I recently saw a new laser cutter on the market, and I held back for a while before making the decision to buy it. That may have been a mistake. The Glowforge is shaping up to be a great machine. I’ve followed it since September, when they were offering 50% discounts on all models. At the time of this article, they have raised the price to 40% off retail. And if you use this referral code, both you and I will get $100 off our orders! (In full disclosure, I have had no contact with Glowforge, nor have I actually used the machine myself yet. I’m just really stoked about this machine and its potential. I do have a PhD in Computer/Electrical Engineering with a Computer Science background, and I run an official node of the FabLab network that was started at MIT, so hopefully I’m not off base here…)

There are lots of cheap (<$15k) 40-watt laser cutters on the market, such as the Chinese ones on Alibaba, or Full Spectrum’s. So why go in on a Glowforge? Quite simply, it is the best-designed laser cutter for FabLab/makerspace/hackerspace use. Unlike others in its price range, you don’t need a 5-gallon bucket of distilled water and a fish pump to cool the laser tube (yes, that’s a real thing some other models at these prices require, and it is ridiculous). It breaks the paradigm of how users interact with a laser cutter, following some of the latest research on user interface and user experience in computer science. Honestly, those are projects I wish I could have implemented myself but didn’t have the time for. The Glowforge brings together lots of great solutions from these projects and crams them all into a single package.

Paradigm shift #1: Unlike traditional laser cutters, where you print to the machine like a printer on a network or connected to your computer, you can print to a Glowforge from practically any location in the world, because the software is cloud-based. I used to be wary of this kind of thing, but since Glowforge also promises to make a version of the software open source, you can implement it yourself if you want.

Paradigm shift #2: Glowforge lets you position your designs on your material using a live camera view of the material. This is a godsend for anyone familiar with the waste laser cutters generate. To make sure a design will fit on a scrap piece of material, you have to do some measurements, hold your tongue just right when pushing the cut button, and hope you remembered to reset the origin (the 0,0 point) on the laser before cutting. Depending on what was originally cut out of the scrap you are using, you might have a weirdly shaped area left, and it can be very hard to tell whether you can use it to cut a new part. There are some research ideas for handling this kind of situation, and other tools you can purchase that are very expensive, but Glowforge has it built in. Being able to literally move my design on top of a video camera image of the material lets me use as much material as possible without the risk of mis-cutting and having to toss that piece and grab a new one.

Another great feature: simply draw on the material with a pen. The cameras will read your design, vectorize it, and the laser will frickin’ cut it exactly as you’ve drawn it. This is worthy of some type of award, because it will save a lot of people a lot of time. I constantly have students who would benefit from simply being able to draw their designs by hand and quickly cut a part out. Again, this feature comes from newer research into the user interface design of laser cutters that I’ve been keeping my eye on for some time now.

Paradigm shift #3: Glowforge uses dual cameras inside the cabinet not only to let you place your design on the material, but also to conform and auto-focus even on non-level materials. The example in their web video mentions etching a design on a MacBook, but this is sooo much more powerful and useful than just that. Many materials you want to laser, such as a 1/8″ piece of plywood, have a warp to them. If you focus your laser on the low part of the warp and keep that measurement to cut the whole part, you can end up with edges that aren’t exactly as you designed them, or edges that are weak because the wood burned instead of ablating. This is bad for a couple of reasons: it can start small fires, and more commonly your edge comes out brittle and ashy. This changes the workable dimensions of your parts and sometimes makes them unusable.

The cameras can also detect the materials you put in the machine. Materials you buy from Glowforge carry barcodes (you can also make your own) that tell the machine what settings to use for engraving or cutting that material. Settings differ for plastics versus wood, and even different densities of wood matter, so this is a great solution to the problem of figuring out what power and speed to set the laser to.

And finally on this point, it seems there’s also some image recognition: put your laptop in there and it’ll detect that it’s a MacBook and know what settings to use to best etch it. It can even bring up possible designs others have submitted online for you to use if you want.

Paradigm shift #4: The firmware, as well as a simplified version of the cloud software, will be made open source. This is great because I can hack on it (as I would have done anyway, but at least now I have a much better starting point). I’m certain a community of hackers/makers will be adding features, which is exciting since this machine already starts with an impressive set.

Paradigm shift #5: On the Pro version of the machine, you can open the front and back to cut material that is 20″ wide but infinitely long. This idea comes from two places: the vinyl cutting machines on the market (which cut a fixed width but practically infinite length of material from a spool), and the Shaper and the awesome ShopBot Handibot (shoutout to our friends and fellow Carolinians; thanks again for the help this past summer in Pittsburgh, Salley!), which can do large designs piece-wise. The cameras on the Glowforge can help align the previously lasered portion with your design and make adjustments as needed. This is incredibly helpful for making sure the finished product comes out correctly.

Glowforge will also host a library of other people’s designs you can choose from if you aren’t the artistic type. This is similar to MakerBot’s Thingiverse or Ultimaker’s YouMagine for 3D parts, and Inventables’s projects section for CNC files, which can be imported into Easel (Inventables’s cloud-based CAD/CAM software for their line of Shapeoko, Carvey, and X-Carve machines).

Words of negativity: For the specs of the machine, the 20″-wide cutting area is slightly awkward; a 24″ width seems more practical. Also, since the Glowforge isn’t out yet, I have to wait: wait to see if it lives up to these expectations, and wait to play with it myself.

All that being said, the Glowforge sale at this point is a presale. I won’t receive my machine until summer 2016 or later, but you have until it ships to cancel your order for a full refund. I expect any bugs in the system will be worked out before I get mine, and if not, I’ll have a good excuse to play with it in more depth.

Adam-Atom

Disclaimer: The only affiliate link in this post is for the Glowforge. All other links supplied in this post are to simplify your internet browsing adventure.

How to track your family history free and easy

A few years back, I got interested in my family history. My grandmother had been telling us stories for years, but at some point a switch flipped in my mind and suddenly I had an intense interest in stringing those stories together. I thought I’d write up my research method, including some of the tools I use. Please excuse the verbose brain dump, but I want this page to be a one-stop reference for myself as well as anyone else interested in genealogy.

Click here to read the whole article
