Building an Internet-Connected, AI-driven Candy Dispenser with OutSystems.
A Lighthearted Look at Adding AI to your Apps.
If you’re like my boss, you’ve probably thought to yourself, “what the world needs right now is an AI-driven candy dispenser.” OK, maybe that wasn’t exactly what he was thinking, but that’s where we ended up when he had the idea to demonstrate the power of AI using an unconventional demo approach.
To make this work, he purchased a simple mechanical food dispenser, shown in the image below:

The Starting Point

This dispenser had the advantage of being inexpensive, and more importantly, it had a removable handle and shaft, which could be replaced by a stepper motor (which can be moved with precision using a variety of microcontroller boards or a Raspberry Pi), along with some 3D printed parts to mate the motor to the paddle wheel used to dispense the contents.
That’s where I came in.
Automating the Dispenser.
Before we could demonstrate the power of adding AI to an OutSystems app, we needed the candy dispenser to be controllable and automated.
As noted, this involved adding a stepper motor, similar to the one below, to control the paddle wheel:

NEMA 17 Stepper Motor

To mate the stepper motor to the dispenser, I designed and printed a shaft adapter and a bracket.
To enable the stepper motor to be controlled by the OutSystems app, I purchased a $20 Particle Photon microcontroller*, which has built-in Wi-Fi support and is programmable using familiar Arduino code, along with a $5 stepper motor driver, which simplified the programming of the device.
Wiring was accomplished using a breadboard and jumper wires, which is pretty common for electronics prototyping.
For fun, and to provide a visual indication of when the device receives commands from the app, I added a multicolor LED ring to the device as well.

* An earlier version of the dispenser used a Raspberry Pi and Python code, and was a little more complex.
The Particle Photon was both cheaper and easier to use, and also had a built-in SDK that enabled calling functions on the device directly as REST API methods.
I won’t spend a lot of time on the device itself (I’m planning a follow-up post that goes into more detail on the hardware, and the firmware that allows the Candy Dispenser to be addressed via REST APIs), but at the end of the build process, I had a Candy Dispenser that could connect to the internet and receive commands via REST.
There’s a parts list for the dispenser in the description of the YouTube video at the end of this post, for those who are interested.
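For the curious, here’s a rough Python sketch of what “receiving commands via REST” looks like from the caller’s side, based on Particle’s cloud function API. The device ID and the `dispense` function name are placeholder assumptions; the real function names live in the firmware I’ll cover in the follow-up post.

```python
# Hypothetical sketch of addressing the dispenser through Particle's cloud
# REST API. The device ID and the "dispense" function name are assumptions.

PARTICLE_API = "https://api.particle.io/v1/devices"

def build_function_call(device_id, function, arg=""):
    """Build the URL and form payload for a Particle cloud function call."""
    url = f"{PARTICLE_API}/{device_id}/{function}"
    payload = {"arg": arg}
    return url, payload

# An actual call would POST the URL with an access token, e.g.:
#   requests.post(url, data={**payload, "access_token": token})
url, payload = build_function_call("my-photon-id", "dispense", "1")
print(url)  # https://api.particle.io/v1/devices/my-photon-id/dispense
```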
The Mobile App – Hardware and AI.
Once I set up the Candy Dispenser hardware to respond to REST API calls, the next step was to create an OutSystems mobile app to consume these REST APIs and call them based on inputs from device hardware and AI analysis, for the following use cases:

NFC for reading text embedded in NFC tags: When the text read from an NFC tag matches the target value in the app settings, dispense candy.
AI Text Analytics for determining the sentiment (positive/negative) in a given text string: If the sentiment is positive, dispense candy.
AI Emotion Detection, leveraging native camera hardware and Azure Cognitive Services: Evaluate a picture taken from the device camera to determine the emotion of the person in the picture. If the person is sad or angry, dispense candy.
In each case, if the target criteria for the use case are not met, the app calls a REST API that tells the LED ring on the candy dispenser to display red, which provides an additional indication that the REST call succeeded.
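The decision pattern shared by all three use cases can be sketched in a few lines of Python: meet the criteria and candy is dispensed; miss them and the LED ring turns red. The `send_command` callback and the command names here are hypothetical stand-ins for the app’s REST wrapper Actions.

```python
# Minimal sketch of the shared decision pattern. The send_command callback
# and the "dispense"/"led_red" command names are hypothetical stand-ins
# for the app's REST wrapper Actions.

def handle_result(criteria_met, send_command):
    """Dispense candy when the criteria are met; otherwise signal red."""
    if criteria_met:
        send_command("dispense")
        return "candy"
    send_command("led_red")
    return "no candy"

sent = []
print(handle_result(True, sent.append))   # candy
print(handle_result(False, sent.append))  # no candy
```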
Accessing Native Hardware.
OutSystems mobile apps are built on top of Cordova, so they can leverage Cordova plugins to provide native access to device hardware.
These plugins are available to developers in the open source Forge library for download and inclusion in their apps.
You can find plugins using the Forge tab in the OutSystems Service Studio development environment, as shown below, or via the Forge website:

Searching the Forge in Service Studio

Once you find the plugin you want, you can install it directly from Service Studio, and the plugin is then available for any application in the server environment where it was installed (note that the Camera plugin shown below is already installed; if it was not, an Install button would appear):

Camera Plugin

After you install the plugins, you can add them to an application using the Manage Dependencies window, accessed by the plug icon on the toolbar, or Ctrl+Q:

Adding the NFC plugin as a dependency

Where the plugin appears after it’s added as a dependency varies based on the functionality provided by the plugin.
Most will appear in the Interface tab (for plugins that provide UI-related or screen-related functionality), the Logic tab (for plugins that provide client Actions that can be used in the app), or both.
For the NFC functionality, I wanted the app to respond to the reading of an NFC tag, and the plugin provides two options for this: MimeTypeListener and NdefListener.
I used the latter, which defines an event that is fired when the NFC radio in the device detects and reads an NFC tag.
To respond to the event, I created a client Action that handles the event, and receives a list of the text records stored on the tag.
The client Action, shown below, checks the first text record (I’ve made the assumption that there will be only one text record) against an app setting stored in local storage, and if it matches, calls the REST API to tell the Candy Dispenser to dispense candy (technically, the client Action is calling a server Action that wraps the REST API call, but the end result is the same).
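The tag-matching logic in that client Action boils down to a simple comparison. Here’s a minimal Python sketch of the check, assuming (as the Action does) that only the first text record matters; tag parsing and the REST call to the dispenser are omitted.

```python
# Hedged sketch of the NFC check: compare the first text record read from
# the tag against the target value stored in the app's settings.

def nfc_tag_matches(text_records, target_value):
    """Return True (dispense candy) when the first text record matches."""
    if not text_records:
        return False
    return text_records[0].strip() == target_value

print(nfc_tag_matches(["CANDY"], "CANDY"))  # True
print(nfc_tag_matches(["NOPE"], "CANDY"))   # False
```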
Client Action for NFC Tag Read

Working with the device Camera is just as easy.
In the CheckMyMood screen, I used an Image control to display a neutral face image.
I added an OnClick event handler to this image, which is executed when the user taps the image.
The client Action checks the status of the Camera plugin, and assuming it’s available, calls the TakePicture Action from the plugin, which I simply dragged into the desired spot in the Action flow.
The TakePicture Action only opens the camera UI.
No picture is taken unless the user actively chooses to do so.
Once the picture is taken, the image data is submitted to Azure Cognitive Services (more on this shortly), which returns an estimate of the emotions displayed in the image.
If the emotions indicate sadness or anger, the app tells the dispenser to dispense candy.
If not, a message is displayed indicating that happy people don’t need candy.
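As a rough Python illustration of that decision, assuming the service hands back per-emotion confidence scores keyed by emotion name (as the Azure Face API does, with keys like happiness, sadness, and anger), the dominant emotion drives the outcome:

```python
# Illustrative sketch of the emotion decision. The score keys mirror Azure
# Face API emotion names; the actual response handling in the app is done
# by the Azure Cognitive Services Connector.

def should_dispense(emotion_scores):
    """Dispense candy when the dominant emotion is sadness or anger."""
    if not emotion_scores:
        return False
    dominant = max(emotion_scores, key=emotion_scores.get)
    return dominant in ("sadness", "anger")

print(should_dispense({"happiness": 0.1, "sadness": 0.8, "neutral": 0.1}))  # True
print(should_dispense({"happiness": 0.9, "sadness": 0.1}))                  # False
```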
The OnClick client Action is shown below:

OnClick Client Action

The last use case, analyzing text for sentiment, does not require any device hardware; it simply uses a text box and a button on the screen. The button invokes an Action, which submits the text from the text box to the OutSystems.AI Language Analysis plugin’s DetectSentimentInText Action, which again I simply dragged and dropped into the client Action logic flow, as shown below. I arbitrarily chose 0.50 as the cutoff between positive and negative sentiment: a positive sentiment dispenses candy, and a negative one gets none.

Negative Sentiment. No Candy for You!

AI Integration.
Both the emotion detection and sentiment analysis use cases rely on AI to drive the outcome.
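As a concrete illustration, the sentiment case reduces to comparing the returned score against the arbitrary 0.50 cutoff described earlier. Here’s a minimal Python sketch of that rule; the score itself comes from the DetectSentimentInText Action.

```python
# Rough sketch of the sentiment rule: scores run from 0.0 (negative) to
# 1.0 (positive), and 0.50 is the arbitrary cutoff chosen in the app.

POSITIVE_CUTOFF = 0.50

def dispense_for_sentiment(score):
    """Positive sentiment earns candy; negative does not."""
    return score > POSITIVE_CUTOFF

print(dispense_for_sentiment(0.82))  # True
print(dispense_for_sentiment(0.31))  # False
```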
AI functionality is easy to add to an OutSystems application, leveraging a variety of connectors and components in the OutSystems Forge, including OutSystems.AI Chatbot and OutSystems.AI Language Analysis, as well as Azure Cognitive Services, Azure Bot Framework, Amazon Rekognition, and more.
For the candy dispenser, I installed both the OutSystems.AI Language Analysis component and the Azure Cognitive Services Connector from the Forge, and added them to my mobile app as dependencies.
Configuring these components is pretty straightforward.
You do need to set up an instance of the appropriate Azure service (most offer free plans to start with), and add the subscription key from the service instance to the appropriate site property in Service Center.
This process is documented in the following articles:

Configuring and using OutSystems.AI Language Analysis
Configuring and using Azure Cognitive Services
Once the AI service instances have been set up, the last step is to provide the relevant plugins with the necessary information to connect to the AI services, which in the case of the two services I’m using is as simple as adding values to Site Properties representing the API key provided by the relevant service.
In my case, I just opened up the AzureCognitiveServicesConnector module (not the Application…Site Properties are configured at the Module level), and set the values for the Face API and Text Analytics API keys, as highlighted below (note that the OutSystems.AI Language Analysis plugin is a wrapper around the Azure Cognitive Services Text Analytics API that makes it simpler to use):

Azure Cognitive Services API Keys

With the keys configured, the mobile application is complete and ready to test.
Here’s a video demo of the completed application. Enjoy!

Candy Dispenser Demo

Want to see the Candy Dispenser in action in person?
Keep an eye here for upcoming events where I’m showing it off, and watch my Twitter account for announcements as well.
January 9, 2020

New Role, New Topic – Low Code.
Finding Low Code…or Did It Find Me?
Ever run into one of those technologies that snags you immediately?
That’s what Low Code recently did for me.
Mobile Apps are Beautiful and Pixel-perfect on OutSystems

As many regular readers will know, I’ve spent the last several years doing independent consulting and coding, mostly in the Microsoft stack.
And I was humming along, with the usual ups and downs that go with being independent, but mostly happy with what I was doing.
Then a friend reached out, and mentioned that someone he’d worked with in the past was looking for someone with Microsoft stack architect skills, and would I be interested in talking with him?

I wasn’t looking for anything new.
But I thought, can’t hurt to take a call, right?
Which is how I was introduced to OutSystems.
The initial call was encouraging, so I decided to take the OutSystems platform for a spin, and I was immediately impressed by just how fast I could build a fully-functional application, using just a visual approach of assembling UI and logic widgets, quickly creating and querying data entities, and rapidly publishing new versions in an agile manner.
Finding the Fun Again.
My immediate reaction was that it reminded me of some of the best of what I fell in love with when I first started using Visual Basic back in the late ’90s.
But without all the code, and with a much more robust publishing and administration infrastructure behind it.
I found that at every turn, from automatically inferring data types for attributes based on the names you give them, to rapidly creating basic list and detail pages by simply pulling a data entity onto a design surface, the tools provided by OutSystems made building apps faster and easier than I was used to…and dare I say it, more fun.
To make a long story short, I decided to continue pursuing the conversation with OutSystems.
After a few more calls and interviews, I accepted a role as a Solution Architect.
I’ve been in that role for about 6 months now, and nothing I’ve learned in that time has diminished my feeling that Low Code, particularly with OutSystems, is a game-changer for application development.
I’ll be sharing more in the coming weeks and months on the whys and hows.
Can’t wait to see what Low Code is all about, and how it works in OutSystems? Check out the OutSystems 2-minute Overview below:

If you’re a passionate technologist, and this has sparked your interest…we’re hiring.
Contact me, and I’d be happy to put you in touch with our great recruiting team.
February 21, 2018

Custom Domains the Easy Way in Azure Web Apps.
One of the best things about cloud development today is the low cost of entry.
With cloud vendors competing to bring customers to their offerings, there are strong incentives to keep prices low, particularly at the entry level.
Microsoft’s Azure offerings are no exception.
You can get started with Azure Web Apps, whether for hosting a blog or a more full-featured application, for free, if you’re willing to accept the limitations of the free plan.
One of those limitations is that the free offering for Azure App Service does not support the use of custom domains.
So any site or app you host using the free plan must use a subdomain of the azurewebsites.net domain, such as myreallycoolsite.azurewebsites.net.
For development and testing, or for hosting an API that will only be called programmatically, this is no big deal.
But for public facing sites, you’re going to want a custom domain.
Read on to learn how easy Microsoft has made that with Azure Web Apps.
Continue reading Custom Domains the Easy Way in Azure Web Apps

September 12, 2016

Save Time and Keystrokes with Emmet in Visual Studio Code.
It’s been more than 8 years since Jon Udell posted an encouragement of blogging over email entitled “Too busy to blog? Count your keystrokes,” and over 5 years since Scott Hanselman followed up with “Do they deserve the gift of your keystrokes?” Both posts explore the idea of our keystrokes being a limited resource that is better used to contribute to knowledge sources like blogs or wikis that are available to large numbers of people, rather than replying to a much more limited audience via email.
In this post, I’ll introduce you to one of my favorite new helpers, Emmet in Visual Studio Code, and show you how it helps me save keystrokes when working with HTML markup.
Continue reading Save Time and Keystrokes with Emmet in Visual Studio Code

September 9, 2016

Visual Studio Code Hits the 1.0 Milestone.
I must have missed this while avoiding the interwebs around April Fool’s Day, but apparently Visual Studio Code is no longer in beta/preview, and has hit its 1.0 milestone.
UPDATE: I was confused when reading the update log, which had the 1.0.0 update listed as March 2016…this must’ve been referring to the preview 1.0 release.
Thus the correction above.
The official public 1.0 release was yesterday, so I didn’t miss it after all.
Details below the fold… Continue reading Visual Studio Code Hits the 1.0 Milestone

April 14, 2016

Top 5 Reasons to Speak at NoVA Code Camp!
…or your local user group, meetup, or code camp.
Becoming a Speaker.
As someone who’s been speaking on technical topics since the late 1990s, I can say with great confidence that there are huge benefits to sharing your knowledge at local code camps and user groups.
And if you’re in the greater Washington, DC metro area, I want to encourage you to submit a talk for the Northern Virginia Code Camp, which is coming up on April 30th, 2016.
Here are 5 reasons you should consider speaking:

Continue reading Top 5 Reasons to Speak at NoVA Code Camp
March 15, 2016

Thread.Sleep equivalent in UWP.
Wanted to share a quick solution to an issue I ran into while working on a Universal Windows Platform (UWP) app for my Raspberry Pi 2.
I was building an app to read sensor data from a .NET Gadgeteer TempHumidity module using the GHI Electronics FEZ Cream, which is a HAT (Hardware Attached on Top) for the Raspberry Pi 2 that allows the use of Gadgeteer modules.
In my case, I’m running Windows 10 IoT Core on my Pi 2, so that I can stick with programming in C#.
The original driver included a call to Thread.Sleep, which it turns out is not available in a UWP app.
For Gadgeteer modules that are directly supported (i.e., with drivers that have already been ported to work with Windows 10 IoT Core), integrating them into a UWP project is as simple as downloading the relevant NuGet packages.
However, in my case, it turned out that the temperature and humidity sensor I was using was an older model which was not directly supported.
The good news is that since GHI makes their Gadgeteer mainboard and driver code available on Bitbucket, it was easy to find the driver code for the sensor I’m using and work on a port to work on the Pi.
Continue reading Thread.Sleep equivalent in UWP

February 16, 2016

Troubleshooting Web API and Angular 2 beta.
Just ran into an issue with some Web API and Angular 2 code I’ve been working on, and since there didn’t seem to be much info in the wild on the error I ran into, I figured I’d blog it, in case it might help someone else.
A Simple Demo of Web API and Angular 2?
Since I had the day off yesterday, I figured it might be a good day to jump in and start doing some work with Angular 2 (Hey, isn’t that what you do with your day off?).
Of course, I’d already run through a number of tutorials that dealt with hard-coded collections of data, so I figured it was time to build something that could retrieve data from an API.
Continue reading Troubleshooting Web API and Angular 2 beta

January 20, 2016

Multiple Monitors in Remote Desktop with Windows 7 Pro.
The Best Laid Plans…
I’ve recently transitioned from working at home to working on-site at a client.
The client did a great job of provisioning a nice desktop PC and large dual monitors.
But one of the things I missed from my home office was my standing desk.
To remedy this, I planned to bring in my laptop, set it up on a stand, and re-purpose one of the two monitors they provided so I could use Remote Desktop to connect to the desktop PC and still enjoy dual monitors…but there was a small wrinkle in my plan.
Continue reading Multiple Monitors in Remote Desktop with Windows 7 Pro

November 18, 2015

Community Megaphone and a Post-INETA Future.
As I noted a few months ago, INETA North America is ceasing operations and wrapping up loose ends.
As part of that wrap-up, the INETA board asked if I would be willing to help with community continuity through the website I created, Community Megaphone.
The idea was for INETA to encourage folks on their mailing list to join a list I set up to discuss the future of Community Megaphone, and what kinds of features might help fill some of the gaps left behind by the end of INETA North America.
With this post, I’d also like to offer others in the developer community the same opportunity.
You can join the mailing list, which is for the purpose of providing ongoing updates on the future plans for Community Megaphone.
And if you don’t want to join a mailing list, but still want to provide feedback or ideas on features that would be useful for user group leaders, speakers, and attendees of developer community events, you can do so on the Community Megaphone Uservoice page, or the Feedback page on the Community Megaphone site.
I’m looking forward to the community’s feedback and to finding better ways to serve developers, and I hope you’ll share your thoughts, too.
November 16, 2015