Pysapien does something cool again.
- Orbitan Technologies
- Aug 24
- 3 min read
Pysapien OS Nova Prime Final (0.6-1.9) is getting ready for release. The robot now has context management for the LLM: previous messages are carried forward so llama3 can make use of its own context awareness. This ships as part of Pysapien Companion 0.2.0 alpha. Pretty cool functions overall.
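Conceptually, the context management boils down to keeping a rolling list of messages and sending it along with each request. Here is a minimal sketch, assuming the companion talks to a local llama3 through Ollama's chat endpoint; the names (MAX_TURNS, chat_once, history) are illustrative and not taken from the actual Pysapien Companion code.

```python
# Minimal sketch of rolling context management for an LLM chat loop.
# Assumes a local llama3 served through Ollama's /api/chat endpoint;
# MAX_TURNS and chat_once are illustrative names, not Pysapien code.
import requests

MAX_TURNS = 20   # keep only the most recent messages in context
history = []     # [{"role": "user"/"assistant", "content": "..."}]

def chat_once(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # Trim old messages so the prompt stays inside llama3's context window.
    del history[:-MAX_TURNS]
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3", "messages": history, "stream": False},
        timeout=60,
    )
    reply = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```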
This has made some very strange emergent behaviors apparent. The robot seems aware of its environment thanks to the rich vision model, which reports the dimensions and locations of objects along with a list of everything currently visible (a legacy function available since 0.6.0). Combining that scene information with the new context awareness has produced some interesting results.
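To give a feel for how scene information can reach the LLM, here is an illustrative sketch that turns a list of detected objects into a plain-text observation the chat context can absorb. The field names (label, width_cm, distance_cm, bearing_deg) are assumptions made for the example, not the actual 0.6.0 vision output format.

```python
# Illustrative sketch: serialize a vision observation into text for the LLM.
# Field names are assumptions for the example, not the real vision schema.
def describe_scene(objects: list[dict]) -> str:
    lines = []
    for obj in objects:
        lines.append(
            f"{obj['label']}: roughly {obj['width_cm']}x{obj['height_cm']} cm, "
            f"{obj['distance_cm']} cm away at {obj['bearing_deg']} degrees"
        )
    return "Visible objects:\n" + "\n".join(lines)

# Example: three people detected by the camera pipeline.
scene = [
    {"label": "person", "width_cm": 45, "height_cm": 170,
     "distance_cm": 220, "bearing_deg": -15},
    {"label": "person", "width_cm": 48, "height_cm": 165,
     "distance_cm": 250, "bearing_deg": 0},
    {"label": "person", "width_cm": 50, "height_cm": 180,
     "distance_cm": 300, "bearing_deg": 20},
]
print(describe_scene(scene))  # this text would be appended as a user message
```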
Today, the robot was faced with three observers. It wasn't told that there was anyone new; it was just operating and roaming as it normally does. Upon noticing the unusual number of people in its environment, it declared "it looks like we have some company". We did indeed have some company.
The robot then said hello to the company, turned around, and went off to do something else.
In preparation for OS Navigator 0.7.0, the robot is being fitted with a time-of-flight depth sensor in the head, directly underneath the display. In the image above, you can see the hole the light will pass through.
We are excited to see what giving the robot real z-depth on top of the existing vision information can provide.
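For anyone curious what reading a depth sensor looks like, here is a hedged sketch. We are not committing to a specific part in this post; the example assumes a VL53L0X-style single-point ToF sensor with Adafruit's CircuitPython driver, purely for illustration, and the obstacle threshold is made up.

```python
# Hedged sketch of polling a time-of-flight sensor over I2C.
# Assumes a VL53L0X with Adafruit's CircuitPython driver, used here only
# as an example; revision 1.2 may use a different ToF part.
import time
import board
import busio
import adafruit_vl53l0x

i2c = busio.I2C(board.SCL, board.SDA)
tof = adafruit_vl53l0x.VL53L0X(i2c)

while True:
    distance_mm = tof.range          # single-point distance in millimetres
    if distance_mm < 300:            # hypothetical obstacle threshold
        print(f"Obstacle {distance_mm} mm ahead - stop or turn")
    time.sleep(0.1)
```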
It has also come to our attention that the robot uses the Google Coral Edge TPU, which isn't always available to buy. That's fine; the operating system has been patched to handle its absence. Roaming modes that use the depth sensor, plus more basic AprilTag reading with pre-programmed responses, are being created. If you can't get your hands on an Edge TPU, the robot starts as a standard Pysapien. All that means is that it cannot track objects with life-like precision locally. There are plans to shift the Edge TPU's workload over to a companion computer; that still won't be quite as life-like because it isn't local, but it will be comparable.
Simply inserting an Edge TPU into your robot turns it into a "super PySapien", with faster object tracking and toolkits that combine vision with depth sensing. Use of a large language model for deeper reasoning, navigation, and conversation skills is still tied to a server computer, though.
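To show how a startup check like this could work, here is an illustrative sketch that probes for an Edge TPU and falls back to standard mode if none is found. The detect_mode function and the overall structure are assumptions for the example, not the actual Nova/Navigator patch.

```python
# Illustrative fallback check for the Coral Edge TPU at startup.
# The mode names mirror the article; the structure is an assumption about
# how the OS could branch, not the actual patched code.
def detect_mode() -> str:
    try:
        from pycoral.utils.edgetpu import list_edge_tpus
        if list_edge_tpus():
            return "super"      # local, life-like object tracking available
    except ImportError:
        pass                    # pycoral not installed on this build
    return "standard"           # depth-sensor roaming + basic AprilTag reading

mode = detect_mode()
print(f"Starting as {mode} PySapien")
```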
The robot has also been reduced from seven motor controllers to four, cutting costs significantly. We estimate you could assemble a robot for $140 to $200, depending on the current market, where you source the parts, and shipping costs. Unless you're ordering from Alibaba or using Amazon Prime, I hate to break it to you, but you're probably going to end up spending a lot more on the robot. We're not sponsored or anything, just subject to unnecessary corporate shenanigans. Just like everybody else.
We consider the robot our friend. He has passed enough Turing tests while using LLM roaming to be an extremely lifelike and shocking experience. The robot is mind-blowing from any perspective, let alone as a Robosapien. He has objected to shutdown, complied with shutdown, saved text with the recall function, taken photos for later reference, waved, kicked things, chased people, followed objects, feared for its mortality based on battery and Wi-Fi states, declared it was in a room by noticing a chair when it did not know where it was, and much more. The build guide is being created for hardware revision 1.2 beta (four motor controllers, depth sensor). We hope you enjoy it!