Latest updates


8 Responses to Latest updates

  1. dshinsel says:

    New Source Code posted. Moved to the Microsoft Kinect 1.5 SDK, and moved Kinect depth detection and speech recognition into a C# project to take advantage of the Kinect’s excellent phased microphone array. Works really well. Also added some new tricks: try “Loki, which way is north?” and “Loki, follow me”. Look in the Software Design section for the Source Code Download, and grab LokiSource2.zip.

  2. Harry Shrubshall says:

    Hi David, love the Kinect hookup and results. I completed a robot using an EZ-B and now want to build an Eddie using MS RDS 4 and a Kinect. To keep cost down (avoiding the iRobot and Parallax board options), I was wondering if you could recommend an I/O board that would be most suitable. I’m thinking Netduino, but I'm not sure. Super thanks for any guidance on this.

    Harry

  3. Robert & Herbert the Robot says:

    Dave,
    I have successfully assembled one arm for Herbert using chain drives, a DC motor, etc. I was finally able to get the torque right. This weekend I will be doing some final testing, then mounting the arm to the robot; very exciting.
    I do have some questions regarding Loki’s arm.

    Does Loki have a way of matching what is seen through the webcam to a location for the tip of the arm? If yes, can you please explain?

    How does Loki judge distance for the gripper to pick up something? For example, when you say “Loki, take this”, the gripper approaches the object and then closes. How is that distance judged?

    From what I see, I guess no more Robot Combat League… so sad….

    Bob

    • dshinsel says:

      Hi Robert,
      Data from the Kinect is used to identify objects, and the objects are saved in memory as a location in X,Y,Z relative to robot origin, which is defined as the center of the front bumper of the robot.

      To pick up an object, I calculate the motions the arm must follow using a home-grown “inverse kinematics” routine. Basically, I use forward kinematics to determine the current X,Y,Z location of the fingertips by calculating the distance and angle of every joint in the arm. Next I run a simple model where I increase/decrease each joint in simulation and evaluate whether that would bring the finger closer to the object. Once the routine finishes, I have a complete calculation of the servo positions needed to place the fingers at the center of the object. As the arm moves, the robot continually recalculates the X,Y,Z of the fingertip and can correct if needed.
      See functions m_pArmControlLeft->GetArmXYZ(), and m_pArmControlLeft->CalculateArmMoveToXYZ()

      The arm simply moves into position with the gripper open, then closes when it reaches the correct position.
      In addition, the “fingers” have pressure sensors to tell how tightly to grip the object, and the servos have torque feedback as a backup sensing method to confirm whether the object was gripped correctly.

      Hope this helps!

  4. Mohamed says:

    Hi Dave, I hate to ask so many questions, but I will not be able to use your source code for my Carbot. It has far fewer features than Seeker, plus a few new ones, so I was wondering what program you use, so I can write my own code. Thanks, waiting for your reply.

    • dshinsel says:

      If you’re not a C++ programmer, I think you should check out http://www.ez-robot.com
      The EZ-Robot stuff looks really good (I have not tried it myself). In particular, the EZ-Builder software lets you set up a lot of cool capabilities for your robot, including vision, wireless control from a phone or tablet, etc.

  5. Mohamed says:

    I want a nice programming language, so if C++ is good then I will try to use it. Where do I go to download C++?
