Wednesday 21 December 2011

mbed MicroBridge in Thunder Board

1. Import the mbed MicroBridge project for the mbed NXP1768 into the online compiler.
2. Open the Adb.cpp file.
3. Go to line 768.
4. Add the following lines (shown in red in the original post) to the code:

................................................
                        //other
                        log("non input,output Endpoint address=%d,wMaxPacketSize=%d,bmAttributes=%d\r\n",input_ep,epDesc->wMaxPacketSize,epDesc->bmAttributes);
                    }
                }
                break;
            default:
                log("unkown desc type(%d) \r\n",descType);
        }
        p+=descLen;
    }
   
    input_ep=129;
    output_ep=1;
    
   
    if (!(input_ep && output_ep)) {
        log("can't find accessory endpoints\r\n");
        return(false);
    }
   
.................
5. Compile the project, then download and run it on your connected mbed device.
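The hard-coded values above follow the standard USB endpoint-address convention: bit 7 encodes the direction (set = IN, device to host) and the low bits the endpoint number, so 129 (0x81) is endpoint 1 IN and 1 is endpoint 1 OUT. A quick Python illustration (my own helper, not part of the MicroBridge project):

```python
def describe_endpoint(addr):
    # USB endpoint address: bit 7 = direction (0x80 set means IN,
    # i.e. device-to-host), bits 0-3 = endpoint number
    direction = "IN" if addr & 0x80 else "OUT"
    return "EP%d %s" % (addr & 0x0F, direction)

print(describe_endpoint(129))  # EP1 IN  (the input_ep value above)
print(describe_endpoint(1))    # EP1 OUT (the output_ep value above)
```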


Connections:
I used a SparkFun mini-B USB breakout board for this project.

Mini USB      mbed NXP1768

Vcc           Vout (3.3 V regulated out)
D-            D-
D+            D+
ID            no connection
GND           GND (0 V)

Connect the mini USB to the mini USB OTG (ADB) port of the Thunder Board.
For testing, I connected a potentiometer to pin 13 and two LEDs to pins 25 and 26.

Application on the Android side:
ServoControl.zip -->http://code.google.com/p/microbridge/downloads/list

Application on the mbed side (modified as explained above):
MicroBridge Project -->http://mbed.org/users/jksoft/programs/MicroBridge/lwareg

Sunday 18 December 2011

FezBridge in Thunder Board

This project is based on the FezBridge (http://code.google.com/p/robotfreak/wiki/FezBridge), where you can find information from the original developer. Here we will see some modifications that I made so that the FezBridge runs on the Thunder Board of TechNexion (on Android 2.2 Froyo).
The general concept of the FezBridge is that the Fez Domino behaves as the host (server) and the Android system as the client. The connection is established with a mini-to-mini USB cable (to the ADB port of the Android system). The two devices use TCP for communication.
In the original project they use one distance sensor and two servos for testing. The distance sensor tests the communication from the Fez Domino to the Android device, and the two servos test the other direction. As noted in the FezBridge project, the graphical animation for the distance sensor is not working, but we can see the distance changes, for example, in the debug messages in VS2010.
In this project, I used a potentiometer instead of the distance sensor (it changes the voltage level at the Fez Domino input A0) and LEDs instead of the servos (we can notice the change of the PWM pulse on Di2 and Di3 of the Fez Domino board).
For this project you will need Visual Studio 2010 to program the Fez Domino (general setup instructions: http://www.tinyclr.com/support/) and ADB to install the Android application on your board.
Finally, on the Android side, the ServoControl application from the MicroBridge project has been used with some modifications (especially in the GUI of the application).


--Here is the Android application, which you can install on your board through ADB.
http://www.multiupload.com/480MAP0FKC
--And here is the C# project to program the Fez Domino.
http://www.multiupload.com/D1AOL9HJW9
--Here is a debug output from VS2010 showing the different values we read from the potentiometer (out) and the x,y coordinates that drive the PWM pulse (in).
http://www.multiupload.com/PAYHVBFLZR


Useful Links:
MicroBridge
FezBridge
TinyCLR

The photos below show the project.
Fez with LED and Potentiometer connected for testing purposes

Fez Domino connected to the Thunder Board via ADB


The application on Android

Tuesday 13 December 2011

Read Magnetic Stripe Card in Thunder Board through Mbed NXP1768

Here I will describe how I managed to read the data of a normal bank magnetic stripe card with the Thunder Board, via the mbed NXP1768 module.

The next image depicts the block diagram of the project.

The project has three different interfaces.
The first step is to create the interface/driver for the communication between the device that reads the two tracks of the magnetic card and the mbed NXP1768 microcontroller. In my case the driver can read the magnetic card in both swipe directions, so the two LEDs indicate the direction of a successful read.
After completing this step I implemented the interface between the mbed and the Thunder Board, which is divided in two parts: a) serial between the mbed and the FTDI module, and b) a USB connection between the FTDI module and the Thunder Board.
Finally, to read the string that the mbed sends to the Thunder Board, which is running Android Froyo, I had to enable the FTDI driver during the compilation of the system, get all the permissions of the specific USB port, and finally set the baud rate of that USB port:
>stty 9600 -F /dev/ttyUSB0
After setting the baud rate of the serial communication, a cat command is enough to read the string that the mbed sends:
>cat /dev/ttyUSB0
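Card data like this typically follows the ISO/IEC 7813 track layouts; track 2, for instance, uses ';' as start sentinel, '=' as field separator, and '?' as end sentinel. Here is a hypothetical Python parser for such a frame (the exact string the mbed emits depends on the firmware, so treat the field layout as an assumption):

```python
def parse_track2(frame):
    # ISO/IEC 7813 track 2: ";PAN=YYMM<service code>...?"
    # ';' = start sentinel, '=' = field separator, '?' = end sentinel
    frame = frame.strip()
    if not (frame.startswith(";") and frame.endswith("?")):
        raise ValueError("not a track-2 frame")
    pan, _, rest = frame[1:-1].partition("=")
    return {"pan": pan, "expiry": rest[:4], "service_code": rest[4:7]}

print(parse_track2(";1234567890123456=2512101?"))
```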

Monday 5 December 2011

Interface DS1621 temperature Sensor to TechNexion Thunder Board (Android)

Here I will describe how I managed to interface the DS1621 temperature sensor from Dallas Semiconductor to the Thunder Board from TechNexion while it is running Android Froyo.

The block diagram below describes the connections between the sensor and the Android system.

After having the above schematic ready, it is time to compile a new kernel-bootloader-rootfs system that includes the Dallas DS1621 driver. The driver already exists in the kernel; you just have to include it in your compilation.

Before starting the compilation you have to add some code to
"board-omap3tao3530.c", which is in the folder
"../rowboat-android/kernel/arch/arm/mach-omap2".
In this file, find:

static struct i2c_board_info __initdata tao3530_i2c_3_boardinfo[]={
};

and add inside it the following lines:

#if defined (CONFIG_SENSORS_DS1621)
   {
                    I2C_BOARD_INFO("ds1621",0x49),
                    .type   =  "ds1621",
   },
#else
                    {},
#endif

After that find the line:

omap_register_i2c_bus(3,100,tao3530_i2c_3_boardinfo,ARRAY_SIZE(tao3530_i2c_3_boardinfo)); 

and make sure it looks like this.

For this project I used the sources from the Rowboat Project.
When you are in the kernel's menuconfig you have to go to:
"Device Drivers -> Hardware Monitoring -> Dallas DS1621" (enable the driver and the hwmon debugging messages).

After successfully building the system you have to boot the Thunder Board with your new compilation, which now includes the driver for the DS1621. To boot from the SD card you have to stop the auto-booting from the NAND memory by pressing a button, and then enter:

>setenv hh_android_args 'setenv bootargs mem=${mem_size} androidboot.console=ttyS2 console=tty0 console=${console} ${video_mode} ${extra_options} ${network_setting} root=${mmcroot} rootfstype=${mmcrootfstype} init=/init'

>setenv bootcmd 'mmc init; fatload mmc 0 84000000 uImage; run hh_android_args; bootm 84000000'

>boot

At the same time you can browse the folder of the Android SDK in order to use ADB (Android Debug Bridge) through a USB connection.

It is usually very helpful to load BusyBox onto the new Android/Linux system.
So, after detecting the Android device from ADB using the following command
>./adb devices

you can "push" the busybox in the android device:
>./adb push /home/kostas777711/busybox /data/busybox
if, for example, the busybox folder is at this location on the host: /home/kostas777711/busybox
Then:
>./adb shell                                             , to enter the shell
>export PATH=/data/busybox:$PATH    , to add it in the PATH of the system
>cd /data/busybox                                  , browse its folder
>chmod 777 *                                         , change the permissions of the whole folder of busybox
>busybox --install                                    , install busybox

Then browse to:
>cd /sys/bus/i2c/devices/3-0049
>ls
With ls you can see the attributes of the driver. By executing:
>cat temp1_input
you can read the temperature from the temperature register.
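The value in temp1_input follows the standard Linux hwmon convention of millidegrees Celsius, so a reading of 23500 means 23.5 °C. A small Python sketch of the conversion (my own illustration; the sysfs path matches the 3-0049 device above):

```python
def read_temp_c(path="/sys/bus/i2c/devices/3-0049/temp1_input"):
    # hwmon drivers report temperatures in millidegrees Celsius
    with open(path) as f:
        return int(f.read().strip()) / 1000.0
```

For example, a file containing "23500" yields 23.5.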

With an oscilloscope you can also "read" the SDA pin of the sensor.
In order to achieve this you can create an infinite loop in the Linux command line of the Android device (from the "adb shell"):

#while :
>do
>cat temp1_input
>done

With this you will be able to read the temperature in the "adb shell" and also notice the transmission of data from the oscilloscope.

While this project was running, I set up another project, mainly for testing, using another DS1621 temperature sensor, the mbed NXP1768 microcontroller, and a four-line LCD (JHD204A). The NXP1768 communicates with the sensor through an I2C bus and with the LCD through a parallel interface. In this case, we have to add pull-up resistors on the SDA and SCL lines.

Here are some pictures of the project:

Android Thunder Board connected with the DS1621 and the testing board (NXP1768, LCD JHD204A & DS1621) and the embedded oscilloscope connected to the SDA of the Thunder Board.

The data of the SDA line.

Displaying the temperature on the JHD204A LCD.

Monitoring the output from the "adb shell" after the infinite loop.

Thunder Base Board, Tao3530 and 4.3" touchscreen display.

Closer look to the external connections.


Monday 28 November 2011

Burn a pre-compiled .img file in an SD card in Linux

Many pre-compiled Android images come in this format (.img) after unzipping.
These are instructions for Linux users.

1. Insert the SD card
2. Open the terminal
3. >fdisk -l  , to find out the X in /dev/sdX
4. >umount /dev/sdX  , (run as root; unmount any mounted partitions of the card)
5. Navigate to the directory of the unzipped image
6. >dd if=name_of_the_image.img of=/dev/sdX bs=1  , (bs=1 works but is very slow; a larger block size such as bs=4M is much faster)
7. After completion the SD card is ready for booting!

or


Use ImageWriter (as root):
>apt-get install usb-imagewriter
to install it, and then run the application:
>imagewriter
to see the GUI. Select the image and the location of the SD card, and press OK.
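The dd step above is nothing more than a raw byte-for-byte copy of the image onto the block device. For illustration only (dd or ImageWriter is the normal tool), the same operation in Python with a large buffer:

```python
import shutil

def write_image(image_path, device_path, buf_size=4 * 1024 * 1024):
    # Raw byte-for-byte copy, equivalent to `dd if=image of=device bs=4M`
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        shutil.copyfileobj(src, dst, buf_size)
```

Run it as root with the device node (e.g. /dev/sdX) as device_path, exactly as with dd.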

Friday 25 November 2011

Install LTIB in Ubuntu 11.04 x64 for i.MX53 QSB

So, general guidelines for installing Linux Target Image Builder in Ubuntu 11.04 x64 for the Freescale i.MX53 Quick Start Board.
1. Go to :
http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=IMX53QSB&fpsp=1&tab=Design_Tools_Tab
Under the Board Support Packages (BSP) section, download the latest "L2..._ER_source".
2. After downloading the package, un-tar it (decompress it).
3. Now you have to make sure that all the prerequisite applications and libraries are set up on the host system (here Ubuntu 11.04 x64).
You can find a lot of documentation for that, but it is best to follow the instructions in the "ltib_build_host_setup.pdf" from Freescale.
You can use the Linux command
>dpkg -l <name-of-package>
to check whether the appropriate packages are already installed, or use the Synaptic package manager that Ubuntu provides.
4. After making sure the host is ready, go back to the uncompressed folder and run:
>./install
Execute this command as a normal user (not root).
Set the installation folder manually (the installer will ask you).
We will call it <ltib_installation_folder> from now on.
5. After completion, do not execute ./ltib in the installation folder yet; make the changes below first.

a) Go to <ltib_installation_folder>/ltib/bin
and replace the entries for "glibc-devel" and "zlib".
For "glibc-devel", replace the existing entry with this:
    'glibc-devel'    => sub { -f '/usr/lib/libm.so'
                           || -f '/usr/lib64/libm.so'
                           || -f '/usr/lib/i386-linux-gnu/libm.so'
                           || -f '/usr/lib/x86_64-linux-gnu/libm.so'
                       },
and for the "zlib":
    zlib         => sub { my @f = (glob('/usr/lib/libz.so*'),
                                   glob('/usr/lib64/libz.so*'),
                                   glob('/lib/libz.so*'),
                                   glob('/lib/i386-linux-gnu/libz.so*'),
                                   glob('/usr/lib/x86_64-linux-gnu/libz.so*'),
                                   glob('/lib/x86_64-linux-gnu/libz.so*'));
                                   @f > 1 ? 1 : 0 },

b) Go to <ltib_installation_folder>/dist/lfs-5.1/mtd-utils
and execute:
cd <ltib_installation_folder>/dist/lfs-5.1/mtd-utils
mv mtd-utils.spec mtd-utils-201006.spec
ln -s mtd-utils-20060302.spec mtd-utils.spec
cd ../../..

Now you can execute ./ltib in the installation folder, which will normally open the LTIB configuration window.
And that was all!

Wednesday 10 August 2011

RGB to HSV method in C#

Because I saw many confusing methods, I combined some of them,
so I am posting this one, which works very nicely:

public static Hsv RGB_to_HSV(Rgb rgb)
       {
           int rgb_max = (int)Math.Max(rgb.Red, Math.Max(rgb.Green, rgb.Blue));
           int rgb_min = (int)Math.Min(rgb.Red, Math.Min(rgb.Green, rgb.Blue));
           Hsv hsv = new Hsv();
           hsv.Value = rgb_max;
           if (hsv.Value == 0)
           {
               hsv.Hue = hsv.Satuation = 0;
               return hsv;
           }
           hsv.Satuation = 255 * (rgb_max - rgb_min) / hsv.Value;
           if (hsv.Satuation == 0)
           {
               hsv.Hue = 0;
               return hsv;
           }
           /* Compute hue */
           if (rgb_max == rgb.Red)
           {
               hsv.Hue = 0 + 43 * (rgb.Green - rgb.Blue) / (rgb_max - rgb_min);
           }
           else if (rgb_max == rgb.Green)
           {
               hsv.Hue = 85 + 43 * (rgb.Blue - rgb.Red) / (rgb_max - rgb_min);
           }
           else /* rgb_max == rgb.b */
           {
               hsv.Hue = 171 + 43 * (rgb.Red - rgb.Green) / (rgb_max - rgb_min);
           }
           return hsv;

       }
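As a sanity check, here is a direct Python port of the same byte-scaled conversion (all three HSV channels in 0-255). One deliberate difference: when red is the maximum channel and blue exceeds green, the C# code can return a negative hue (integer division of a negative numerator), so this sketch wraps the hue modulo 256:

```python
def rgb_to_hsv_bytes(r, g, b):
    # Byte-scaled HSV: value = max channel, saturation = spread / value,
    # hue in 0-255 with red at 0, green at 85, blue at 171
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    if v == 0:
        return (0, 0, 0)
    s = 255 * (mx - mn) // v
    if s == 0:
        return (0, 0, v)
    if mx == r:
        h = (43 * (g - b) // (mx - mn)) % 256  # wrap negative hues
    elif mx == g:
        h = 85 + 43 * (b - r) // (mx - mn)
    else:
        h = 171 + 43 * (r - g) // (mx - mn)
    return (h, s, v)

print(rgb_to_hsv_bytes(255, 0, 0))  # (0, 255, 255)
print(rgb_to_hsv_bytes(0, 255, 0))  # (85, 255, 255)
```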

Cheers!

Monday 30 May 2011

Detect Red, Green & Blue in RGB format

Color detection is one of the most important image/video processing tasks. Imagine how many applications color detection can be useful or even critical for. I implemented this color detection in Matlab. Of course, someone could implement it in OpenCV; maybe that code will come in a few days.
Follow this link and you can experiment with the RGB format and the colors it gives for different values:
RGB color tester.


%clear everything previous
clear all;

% read image from desktop ,enter your path and an image
rgbImage = imread('c:/users/cex/desktop/color.jpg');

% display original image in the first figure (array of figures 2x2)
subplot(2,2, 1);
imshow(rgbImage);
title('Original RGB Image');

% split the image into the three color bands
redBand = rgbImage(:,:, 1);
greenBand = rgbImage(:,:, 2);
blueBand = rgbImage(:,:, 3);

% Maximize the window of figures
set(gcf, 'Position', get(0, 'ScreenSize'));

%RED DETECTION
redthreshold = 100;
greenThreshold =100;
blueThreshold = 100;
redMask = (redBand > redthreshold);
greenMask = (greenBand < greenThreshold);
blueMask = (blueBand < blueThreshold);

% Combine the masks to find where all 3 are "true"
redObjectsMask = uint8(redMask & greenMask & blueMask);
% Initialize to black
maskedrgbImage = uint8(zeros(size(redObjectsMask)));
maskedrgbImage(:,:,1) = rgbImage(:,:,1) .* redObjectsMask;
maskedrgbImage(:,:,2) = rgbImage(:,:,2) .* redObjectsMask;
maskedrgbImage(:,:,3) = rgbImage(:,:,3) .* redObjectsMask;
subplot(2, 2, 2);
imshow(maskedrgbImage);
title('Red');

%GREEN DETECTION
% Threshold each color band.
redthreshold = 100;
greenThreshold =100;
blueThreshold = 100;
redMask = (redBand < redthreshold);
greenMask = (greenBand > greenThreshold);
blueMask = (blueBand < blueThreshold);


% Combine the masks to find where all 3 are "true."
greenObjectsMask = uint8(redMask & greenMask & blueMask);
% Initialize to black
maskedrgbImage = uint8(zeros(size(greenObjectsMask)));
maskedrgbImage(:,:,1) = rgbImage(:,:,1) .* greenObjectsMask;
maskedrgbImage(:,:,2) = rgbImage(:,:,2) .* greenObjectsMask;
maskedrgbImage(:,:,3) = rgbImage(:,:,3) .* greenObjectsMask;
subplot(2, 2, 3);
imshow(maskedrgbImage);
title('Green');

%BLUE DETECTION
redthreshold = 100;
greenThreshold =100;
blueThreshold = 100;
redMask = (redBand < redthreshold);
greenMask = (greenBand < greenThreshold);
blueMask = (blueBand > blueThreshold);


% Combine the masks to find where all 3 are "true."
blueObjectsMask = uint8(redMask & greenMask & blueMask);
% Initialize to black
maskedrgbImage = uint8(zeros(size(blueObjectsMask)));
maskedrgbImage(:,:,1) = rgbImage(:,:,1) .* blueObjectsMask;
maskedrgbImage(:,:,2) = rgbImage(:,:,2) .* blueObjectsMask;
maskedrgbImage(:,:,3) = rgbImage(:,:,3) .* blueObjectsMask;
subplot(2, 2, 4);
imshow(maskedrgbImage);
title('Blue');

Saturday 28 May 2011

Computers as Components: Principles of Embedded Computing Systems Design

This book was the first to bring essential knowledge on embedded systems technology and techniques under a single cover. This second edition has been updated to the state-of-the-art by reworking and expanding performance analysis with more examples and exercises, and coverage of electronic systems now focuses on the latest applications. Researchers, students, and savvy professionals schooled in hardware or software design, will value Wayne Wolf's integrated engineering design approach.

The second edition gives a more comprehensive view of multiprocessors including VLIW and superscalar architectures as well as more detail about power consumption. There is also more advanced treatment of all the components of the system as well as in-depth coverage of networks, reconfigurable systems, hardware-software co-design, security, and program analysis. It presents an updated discussion of current industry development software including Linux and Windows CE. The new edition's case studies cover SHARC DSP with the TI C5000 and C6000 series, and real-world applications such as DVD players and cell phones.
Preview this Book

Embedded programming with the Microsoft .NET Micro Framework


Get the information you need for programming applications in the rich, managed-code environment of the Microsoft .NET Micro Framework. You'll learn how to extend your experience with the .NET Framework and Microsoft Visual C# through real-world examples, expert insights, and code samples, and efficiently build robust applications for the smallest devices. Discover how to:
- Use an object-oriented approach for programming embedded devices
- Create input and output port objects
- Develop detailed text and graphical displays that support complex user interactions
- Add Windows SideShow functionality into your application
- Implement functionality from existing applications to embedded applications
- Bind physical hardware events to Windows Presentation Foundation elements
- Establish embedded-network connections using TCP/IP
- Use emulation techniques for rapid prototyping, experimentation, testing, and debugging
- Optimize performance of resource-constrained devices
Plus: get code samples in Visual C# on the Web.
There is no Preview for this Book.

Embedded signal processing with the Micro Signal Architecture


This is a real-time digital signal processing textbook using the latest embedded Blackfin processor from Analog Devices, Inc. (ADI). 20% of the text is dedicated to general real-time signal processing principles. The remaining text provides an overview of the Blackfin processor, its programming, applications, and hands-on exercises for users. With all the practical examples given to expedite the learning development of Blackfin processors, the textbook doubles as a ready-to-use user's guide. The book is based on a step-by-step approach in which readers are first introduced to the DSP systems and concepts. Although basic DSP concepts are introduced to allow easy referencing, readers are recommended to complete a basic course on "Signals and Systems" before attempting to use this book. This is also the first textbook that illustrates graphical programming for an embedded processor using the latest LabVIEW Embedded Module for the ADI Blackfin Processors. A solutions manual is available for adopters of the book from the Wiley editorial department.
Preview this Book

Embedded media processing


A key technology enabling fast-paced embedded media processing developments is the high-performance, low-power, small-footprint convergent processor, a specialized device that combines the real-time control of a traditional microcontroller with the signal processing power of a DSP. This practical guide is your one-stop shop for understanding how to implement this cutting-edge technology. You will learn how to:
- Choose the proper processor for an application.
- Architect your system to avoid problems at the outset.
- Manage your data flows and memory accesses so that they line up properly.
- Make smart trade-offs in portable applications between power considerations and computational performance.
- Divide processing tasks across multiple cores.
- Program frameworks that optimize performance without needlessly increasing programming model complexity.
- Implement benchmarking techniques that will help you adapt a framework to best fit a target application, and much more!
Covering the entire spectrum of EMP-related design issues, from easy-to-understand explanations of basic architecture and direct memory access (DMA), to in-depth discussions of code optimization and power management, this practical book will be an invaluable aid to every engineer working with EMP, from the beginner to the seasoned expert. It offers comprehensive subject coverage with emphasis on practical application, essential assembly language code included throughout the text and on CD-ROM, and many real-world examples using Analog's popular Blackfin Processor architecture.
Preview this Book

Programming embedded systems in C and C++


Embedded software is in almost every electronic device designed today. There is software hidden away inside our watches, microwaves, VCRs, cellular telephones, and pagers; the military uses embedded software to guide smart missiles and detect enemy aircraft; communications satellites, space probes, and modern medicine would be nearly impossible without it. Of course, someone has to write all that software, and there are thousands of computer scientists, electrical engineers, and other professionals who actually do. Each embedded system is unique and highly customized to the application at hand. As a result, embedded systems programming is a widely varying field that can take years to master. However, if you have some programming experience and are familiar with C or C++, you're ready to learn how to write embedded software. The hands-on, no-nonsense style of this book will help you get started by offering practical advice from someone who's been in your shoes and wants to help you learn quickly. The techniques and code examples presented here are directly applicable to real-world embedded software projects of all sorts. Even if you've done some embedded programming before, you'll still benefit from the topics in this book, which include:
- Testing memory chips quickly and efficiently
- Writing and erasing Flash memory
- Verifying nonvolatile memory contents with CRCs
- Interfacing to on-chip and external peripherals
- Device driver design and implementation
- Optimizing embedded software for size and speed
So whether you're writing your first embedded program, designing the latest generation of hand-held whatchamacalits, or simply managing the people who do, this book is for you.
Preview this Book

Wednesday 25 May 2011

Video Edge Detection In Matlab

This is the code (copy-paste it into the Matlab command line):

vid = videoinput('winvideo')
set(vid,'TriggerRepeat',Inf);
vid.FrameGrabInterval = 5;
vid_src = getselectedsource(vid);
set(vid_src,'Tag','motion detection setup');
figure;
start(vid)
while(vid.FramesAcquired<=inf)
data = getdata(vid,2);
diff_im = imadd(data(:,:,:,1),-data(:,:,:,2));
diff_im = edge(rgb2gray(diff_im),'sobel');
imshow(diff_im);
end
stop(vid)

Edge Detection in Matlab through Visual Studio 2010

Here we will see how someone can execute Matlab instructions through Visual Studio 2010 (C#).
First of all, you have to open a Console Application project in VS2010. To execute this code you must have Matlab installed on your PC. Also, you have to include:
using System.Reflection;
and finally save a JPEG image on your desktop (or wherever you like) and load it from there (you'll need the full path of your image).


The C# code below leads to the same result as executing the following in the Matlab command line:
I = rgb2gray(imread('C:\users\cex\desktop\1.jpg'));imshow(I);O = edge(I,'sobel');imshow(O);
Processed JPEG

Original JPEG


So, the code in C#, in the Console Application will be:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Reflection;

namespace matlab1
{
    class Program
    {
        static void Main(string[] args)
        {
            //Get the type info

            Type matlabtype;
            matlabtype = Type.GetTypeFromProgID("matlab.application");

            //Create an instance of MATLAB

            object matlab;
            matlab = Activator.CreateInstance(matlabtype);

            //Prepare input as an object

            object[] arrayInput = new Object[] { "I = rgb2gray(imread('C:/users/cex/desktop/1.jpg'));imshow(I);O = edge(I,'sobel');imshow(O);" };

            //Call MATLAB method

            matlabtype.InvokeMember("Execute", BindingFlags.InvokeMethod, null, matlab, arrayInput);

            Console.ReadKey();


        }
    }
}
Debugging this piece of code will open the Matlab command window and the processed figure, as shown below:

Monday 23 May 2011

Image Dilation & Erosion using OpenCV

In this example I am using the OpenCV libraries from Visual Studio 2010, programming in C++.
In this application we will erode and dilate an image. The code is clear and commented.

#include "stdafx.h"

#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
#include "cxcore.h"
#include "highgui.h"

int _tmain(int argc, _TCHAR* argv[])
{
        int iterations=1;

       //initialise the 3 images
      
        IplImage* source_image = NULL;
        IplImage* dilated_image = NULL;
        IplImage* eroded_image = NULL;

        //create 3 windows and their names
        cvNamedWindow("Source Image", 1);
        cvNamedWindow("Dilated Image",1);
        cvNamedWindow("Eroded Image",1);

        //load original image
        //if the name of your project is pr1
        //put a JPEG image in the pr1/pr1 folder
        //that the vs2010 will create
        source_image = cvLoadImage("1.jpg",1);
        cvShowImage( "Source Image", source_image );

        //make a copy of the original image
        dilated_image=cvCloneImage( source_image );
        eroded_image=cvCloneImage( source_image );

        //dilate image
        cvDilate(source_image,dilated_image,NULL,iterations);

        //erode image
        cvErode(source_image,eroded_image,NULL,iterations);

        //Present the processed images
        cvShowImage( "Dilated Image", dilated_image );
        cvShowImage( "Eroded Image", eroded_image );

        //Waits for a pressed key (0 delay)
        cvWaitKey(0);

        //destroys the window with the given name
        cvDestroyWindow( "Source Image" );
        cvDestroyWindow( "Dilated Image" );
        cvDestroyWindow( "Eroded Image" );

        //Deallocates the image header and the image data
        cvReleaseImage( &source_image );
        cvReleaseImage( &dilated_image );
        cvReleaseImage( &eroded_image );

        return 0;
}

The output of the above code:



4 iterations applied
One iteration applied

Sunday 22 May 2011

Call Matlab from Visual C# 2010

First of all you have to include:
using System.Diagnostics;

and then add a button:


Double-clicking the new Matlab button produces the auto-generated handler (shown in red in the original post); it is then time to add the following code inside it (shown in blue):

        private void button16_Click(object sender, EventArgs e)
        {

            Process myProcess = new Process();
            //Here you use your own application with full path
            myProcess.StartInfo.FileName = @"C:\Program Files\MATLAB\R2010a\bin\matlab.exe";
            myProcess.StartInfo.CreateNoWindow = true;
            myProcess.Start();
        }

And that is all, folks.

Sunday 15 May 2011

A little of history: Intel 4004

The Intel 4004 was a 4-bit central processing unit (CPU) released by Intel Corporation in 1971. It was the first complete CPU on one chip, and also the first commercially available microprocessor.
(http://en.wikipedia.org/wiki/Intel_4004)

Background Subtraction Using OpenCV libraries and DevC++ Compiler

I implemented background subtraction using the ready-made OpenCV functions; the details are commented in the code below. At the end of the processing, I also apply two stages of erosion and dilation to clean up the foreground mask.


#ifdef _CH_
#pragma package <opencv>
#endif

#define CV_NO_BACKWARD_COMPATIBILITY

#ifndef _EiC
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <ctype.h>
#endif

IplImage *image = 0, *frameTime1=0, *frameTime2=0, *frameForeground=0, *img1=0, *img2=0;

int main( int argc, char** argv )
{
    printf("Press ESC to Close.\n");
    CvCapture* capture = 0;    //Video capturing structure
    capture = cvCaptureFromCAM( -1 );    //Initializes capturing a video from a camera   
    if( !capture )
    {
        fprintf(stderr,"Could not initialize capturing...\n");
        return -1;
    }
    cvNamedWindow( "Camera", 1 );     //create the window for the Camera Output (Directly)
    cvNamedWindow( "frameForeground", 1 );
    while(1)
    {
        IplImage* frame = 0;   //every time create/initialize an image (which name is frame) to process
        int  c;                //integer to exit program
        frame = cvQueryFrame( capture );   //grabs and returns a frame from a camera input
       
        if( !frame )    //if there is no frame exit the while(1)
            break;
        if( !image )    //if there is no image, do the followings
        {
            /* allocate all the buffers */
            image = cvCreateImage( cvGetSize(frame), 8, 3 ); 
            frameTime1 = cvCreateImage( cvGetSize(frame), 8, 1 );
            frameTime2 = cvCreateImage( cvGetSize(frame), 8, 1 );  
            frameForeground = cvCreateImage( cvGetSize(frame), 8, 1 );
            img1 = cvCreateImage( cvGetSize(frame), 8, 1 );  
            img2 = cvCreateImage( cvGetSize(frame), 8, 1 );    
        }
        cvCopy( frame, image, 0 );  
        cvCvtColor( image, img1, CV_BGR2GRAY ); 
        cvCopy( img1, frameTime1, 0 );    //currently frame in grayscale
        cvAbsDiff(
                  frameTime1,
                  frameTime2,
                  frameForeground
                  );
                 
        cvThreshold(
                  frameForeground,
                  frameForeground,
                  10,
                  255,
                  CV_THRESH_BINARY);       
        cvErode(
                frameForeground,
                frameForeground,
                0,
                1);     
        cvDilate(
                frameForeground,
                frameForeground,
                0,
                1);      
        cvDilate(
                frameForeground,
                frameForeground,
                0,
                1);      
        cvErode(
                frameForeground,
                frameForeground,
                0,
                1);
        cvShowImage( "Camera", image );  //displays the image in the specified window 
        cvShowImage( "frameForeground", frameForeground );

        cvCopy( frameTime1, frameTime2, 0 );

        c = cvWaitKey(10);     //waits for a pressed key
        if( (char) c == 27 )  //if key==ESC (27 ESC button) then break
            break;
    }
    cvReleaseCapture( &capture );  //Releases the CvCapture structure
    cvDestroyWindow("Camera");
    cvDestroyWindow("frameForeground");
    return 0;
}
#ifdef _EiC
main(1,"camshiftdemo.c");
#endif



Snapshots of the processed video:

Image/Video Processing using OpenCV and DevC++ Compiler

Follow the OpenCV Installation Guide http://opencv.willowgarage.com/wiki/InstallGuide (more general instructions)
and find out how to configure Dev-C++ under Windows for compiling C programs using the OpenCV library
(http://opencv.willowgarage.com/wiki/DevCpp).
From the main page you can access helpful documentation for using the libraries effectively.
Very useful also is the book :

Learning OpenCV: Computer Vision with the OpenCV Library

Be careful: it is a little tricky to make it work, but I am sure that in the end you will make it.

Once you are done, it's time to play! I am sure that you are going to find it very easy and handy to use!

P.S. You can also make OpenCV work with other compilers (like Visual Studio, Eclipse, etc.)!

Read WebCam in Matlab

The following code inputs the WebCam stream to Matlab in a new figure:

video = videoinput('winvideo')
set(video,'TriggerRepeat',Inf);
video.FrameGrabInterval = 5;
vid_src = getselectedsource(video);
set(vid_src,'Tag','motion detection setup');
figure;
start(video)
while(video.FramesAcquired<=inf)
data = getdata(video,2);
diff_im = imadd(data(:,:,:,1),-data(:,:,:,2));
imshow(diff_im);
end
stop(video)

Look up each command in Matlab for more details (using the help command)!



or just

video1 = videoinput('winvideo');
preview(video1);

to preview the input:

Edge Detection using Sobel Method in Simulink

Processing a video/image using Simulink is an easy job, especially when compared to
real-time processing on embedded DSP systems. By knowing the functionality of each block
you can reach the desired result quickly.
An example follows (using Matlab's Simulink):
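The Sobel method behind Simulink's edge-detection block can be sketched directly in Python with NumPy (my own illustration of the underlying math, not code generated from the Simulink model): correlate the image with the two 3x3 Sobel kernels, combine the gradients, and threshold the magnitude.

```python
import numpy as np

def sobel_edges(img, threshold=100):
    # img: 2-D grayscale array -> boolean edge map
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):              # accumulate each kernel tap over the image
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy) > threshold  # gradient magnitude vs threshold
```

A vertical black-to-white step in the input produces edge pixels along the step columns, while a flat image produces none.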