DARPA LS3 robot to aid army and marine troops


Physical overload is considered one of the major challenges soldiers face in war-fighting scenarios. Researchers at DARPA have created a semi-autonomous legged robot dubbed the Legged Squad Support System (LS3), to be used by squads of soldiers or Marines.

The robot has built-in sensors that allow it to distinguish human beings from inanimate objects such as trees and rocks. The team believes the system could be supporting squads of troops within the next 18 months.

The researchers are currently testing the machine's ability to carry 400lbs on a 20-mile trek within 24 hours without needing to restart or refuel. Its vision sensors must identify obstacles in its path and automatically decide on a course of action around them.

“If successful, this could provide real value to a squad while addressing the military’s concern for unburdening troops. LS3 seeks to have the responsiveness of a trained animal and the carrying capacity of a mule,” shared Army Lt. Col. Joe Hitt, DARPA program manager.

According to the investigators, LS3 can carry heavy loads for dismounted squad members, follow them through tough conditions and respond naturally to the fighters' commands. It would function much as a pack animal does alongside a handler.

LS3 builds on mobility technology from DARPA's BigDog technology demonstrator, as well as perception research that serves as the robot's eyes and ears. The 18-month platform-refinement test phase, to be conducted with the Army and Marine Corps, is expected to begin in summer.

NASA launches Space Race Blastoff multiplayer game on Facebook


Those not quite abreast of the happenings beyond Earth may want to get updated. NASA has introduced a multiplayer Facebook game dubbed Space Race Blastoff that quizzes users on all things space.

Questions such as ‘Who was the first man to set foot on the moon?’ and ‘Who launched the first liquid-fueled rocket?’ feature in the game. Essentially, the title tests players’ awareness of NASA’s work and history, along with their knowledge of the technology, science and pop culture surrounding the agency.

Players who answer questions correctly earn virtual badges depicting NASA astronauts, space objects and spacecraft. They are also entitled to additional badges for completing sets and can collect premium badges.

“Space Race Blastoff opens NASA’s history and research to a wide new audience of people accustomed to using social media. Space experts and novices will learn new things about how exploration continues to impact our world,” commented David Weaver, NASA’s associate administrator for communications.

The agency chose Facebook to reach a large audience and to let users compete with one another, though solo games can also be played. To play, users pick an avatar and then attempt an array of 10 questions. Each correct answer earns 10 points, and the player who answers correctly first gets bonus points. The winner of the bonus round is awarded a badge.
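The scoring rules described above can be sketched as a small function. Only the 10 points per correct answer is stated in the article; the size of the speed bonus is an assumption for illustration:

```python
# Sketch of Space Race Blastoff's scoring as described: 10 points
# per correct answer, plus a bonus for the fastest correct responder.
# The 5-point bonus value is an assumption, not from the article.

def score_round(correct, fastest, bonus_points=5):
    """Return a player's points for one question."""
    if not correct:
        return 0
    points = 10                 # every correct answer earns 10
    if fastest:
        points += bonus_points  # first correct answer earns extra
    return points

# A 10-question game where a player answers 8 correctly,
# being the fastest on 3 of them:
total = sum(score_round(True, i < 3) for i in range(8))
print(total)  # 8*10 + 3*5 = 95
```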

A team comprising Scott Hanger, Todd Powell and Jamie Noguchi from NASA’s Internet Services Group in the Office of Communications is behind the development of Space Race Blastoff.

Search and rescue robots inspired by snakes: Research


The fusion of reptile science and robotics seems to be a hot agenda for scientists at the moment. Most recently, Georgia Tech experts designed a new robot that uses relatively little energy and is intended for search and rescue jobs, a build inspired by observing the locomotion of snakes.

After examining and recording the motion of 20 distinct snakes, the team developed Scalybot 2, a robot that mimics the rectilinear form of locomotion observed in snakes. Because snakes use very little energy while moving across large distances, the investigators tried to design the robot along the same lines.

Hamid Marvi, a Mechanical Engineering Ph.D. candidate at Georgia Tech, commented, “By using their scales to control frictional properties, snakes are able to move large distances while exerting very little energy.”

According to the team, Scalybot 2 can automatically alter the angle of its scales as it encounters different terrains and slopes, an adjustment that lets the robot either fight or generate friction as needed. The two-link robot is operated by a joystick on a remote control and moves backward or forward using four motors.

When a snake moves in rectilinear motion, it does not bend its body laterally. Instead, it lifts its ventral scales and pushes its body forward by sending a muscular wave from head to tail.

One of the scientists noted that this research shows how snakes can be helpful to the public too. Scalybot 2 was unveiled at the Society for Integrative & Comparative Biology (SICB) annual meeting in Charleston, S.C. this month.

Robots to help recover from strokes?

The media has often depicted robots helping with daily activities, but what if they could actually help people recover from strokes? It sounds unbelievable, but scientists from the University of Sheffield are on the verge of developing robots that may put stroke patients on the path to recovery.

The three-year FP7 European project, termed Supervised Care and Rehabilitation Involving Personal Tele-robotics (SCRIPT), will aim to build a robotic hand-and-wrist device to work towards the recovery of stroke patients, with therapy provided during the acute period of stroke rehabilitation.

Professor Gail Mountain, Professor of Health Services Research at the University of Sheffield's School of Health and Related Research (ScHARR) and one of SCRIPT's principal investigators, commented, “The Sheffield contribution to the SCRIPT project will be located in the skills and knowledge that we have gained through leading and being involved in previous interdisciplinary technology development projects for people with stroke.”

The team's objective is to create a tele-robotic system that patients can use at home and that can be managed remotely, which is likely to cut the number of hospital visits. To make the experience more engaging, the scientists are working to make the robot's communication capabilities resemble a therapeutic interaction.

Essentially, the mechanism will deliver hand and wrist exercises intended to contribute significantly to personal independence. The developed models will be released for household use with communicative and encouraging tools designed to help the process of patient recovery.

Hitchhiking mite apparently found in ancient spider fossil


Wish to see a hitchhiking mite comfortably riding on the back of a spider? Scientists from the University of Manchester and elsewhere have produced 3D images of a prehistoric mite apparently enjoying a ride on the back of a 50 million-year-old spider.

The mite is barely visible to the naked eye and is ensnared within Baltic amber. It is supposedly the tiniest arthropod fossil ever examined with X-ray computed tomography (CT) scanning methods. Such hitchhiking behavior, the scientists believe, must therefore be at least 50 million years old.

“CT allowed us to digitally dissect the mite off the spider in order to reveal the important features on the underside of the mite required for identification. The specimen, which is extremely rare in the fossil record, is potentially the oldest record of the living family Histiostomatidae,” opined Dr David Penney, one of the study’s authors based in the Faculty of Life Sciences.

The team explained a phenomenon called phoresy, in which one organism uses another animal to travel to new surroundings; such behavior remains common among varied groups today. Using a sub-micron contrast system, they obtained high-quality 3D images of this unique fossil, which is otherwise inaccessible to imaging, and spotted an array of vital features, as though a modern-day animal were being observed under a microscope.

The amber mite specimen is barely 176 micrometres long and is one of a kind, the authors say. The findings are published in the Royal Society journal Biology Letters.

New radar system can see through walls, claims MIT team

Ever wondered what lies behind those solid walls? Experts from the Massachusetts Institute of Technology (MIT) have developed a radar technology that can seemingly see through walls from a distance, giving a real-time picture of the activity taking place on the other side.

The device is an antenna array arranged into two rows, with eight receiving elements at the top and 13 transmitting elements below. The research has useful military applications, specifically in urban combat situations.

The instrument works by having the transmitters emit waves of a certain frequency towards the target. As the waves strike the wall, the concrete blocks almost 99 percent of them from passing through. Once the waves bounce off the target, they must pass through the wall again to reach the radar's receivers, losing strength a second time. By the time the signal reaches the receiver, it has weakened to about 0.0025 percent of its original value.
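The two-way loss compounds multiplicatively: if the wall passes only a small fraction of the signal in each direction, the round trip squares that fraction. A quick sketch of the arithmetic (the one-way transmission figures are illustrative assumptions):

```python
import math

def round_trip_fraction(one_way_transmission):
    """Power fraction surviving wall -> target -> wall."""
    return one_way_transmission ** 2

def to_db(fraction):
    """Express a power fraction as decibels."""
    return 10 * math.log10(fraction)

# If the wall blocks ~99% of the signal each way (passes 1%):
print(round_trip_fraction(0.01))         # 0.0001 of original power
print(to_db(round_trip_fraction(0.01)))  # -40 dB

# Passing only 0.5% each way leaves 0.0025% of the original power:
print(round_trip_fraction(0.005))        # 2.5e-05
```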

“[Signal] amplifiers are cheap. What has been difficult for through-wall radar systems is achieving the speed, resolution and range necessary to be useful in real time. If you’re in a high-risk combat situation, you don’t want one image every 20 minutes, and you don’t want to have to stand right next to a potentially dangerous building,” elaborated Gregory Charvat, a member of the technical staff at Lincoln Laboratory and leader of the project.

The researchers used S-band waves, which have roughly the same wavelength as wireless internet signals. Because such short wavelengths lose so much power passing through walls, amplifiers are needed to make up for the signal loss. The team also used an analog crystal filter to separate the modulated waves coming from the wall from those arriving from the target.

The filter passes only frequencies around 30 kilohertz to the receivers, effectively deleting the wall from the picture so that its reflection does not swamp the receiver. The scientists believe the system looks promising owing to its real-time imaging capability, and it is also said to provide good resolution by exploiting digital processing and advanced algorithms. Though too bulky to carry in the field, it could be mounted on a truck for use.

The processor used in the instrument relies on a subtraction technique, so the radar can recognize only moving subjects. But no human can stand perfectly still; there is always some slight movement, which is sufficient for the device to detect a person's location.

The team added that the system converts the signals it receives into digital video. At present, humans appear as blobs moving around the screen, viewed from a bird's-eye perspective, as if the viewer were standing on the wall and looking down at what's happening.

The investigators are now working to transform those blobs into clearer symbols that will make the system more user-friendly. They note that the device has been designed primarily with military applications in mind.

The project won the best paper award at the 2010 Tri-Services Radar Symposium.

Robot analyzes biological problems?


Sir Isaac Newton gave us the laws of motion after years of groundwork and research. Could a robotic computer identify such laws in a fraction of the time? Experts from Vanderbilt University and Cornell University have designed a computer system that works from raw biological data, deriving mathematical equations that describe how a biological system functions.

The researchers claim the computer came close to solving the problem right from the start. The system, named the Automated Biology Explorer (ABE) by Vanderbilt physicist John Wikswo, is powered by software called Eureqa.

The software was developed in 2009. Eureqa's initial achievement was recognizing the laws of motion by analyzing the movement of a pendulum, and after proving itself there, it was put to work on more complicated problems in science.

“Biology is the area where the gap between theory and data is growing the most rapidly. So it is the area in greatest need of automation,” commented Hod Lipson of Cornell University.

The team believed Eureqa could be used to crack tough biological problems on its own. In its first trial, the team tested the software on a biological process called glycolysis, which contributes to the creation of energy in living cells.

The scientists focused on glycolytic oscillations, testing the software on the responses of the yeast cells that drive the chemical process. They used established mathematical models of the system to generate precise measurements under different conditions, and included 10% random error in the measurements to make the data realistic.

This data set was then fed into Eureqa, which processed it and delivered a set of equations nearly identical to those already known.
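Eureqa's approach, symbolic regression, can be caricatured in a few lines: generate noisy measurements from a hidden system, then search a space of candidate equations for the one that best explains the data. A minimal sketch, with a simple oscillation standing in for the real glycolysis model and a tiny fixed candidate set standing in for Eureqa's evolved expressions:

```python
import math, random

random.seed(42)

# "Measurements" from a hidden oscillatory system, with the ~10%
# random error the researchers added to make the data realistic.
hidden = lambda t: math.sin(2.0 * t)
data = [(t / 10, hidden(t / 10) * (1 + random.uniform(-0.1, 0.1)))
        for t in range(100)]

# Candidate equations the search considers.
candidates = {
    "sin(2t)": lambda t: math.sin(2.0 * t),
    "cos(2t)": lambda t: math.cos(2.0 * t),
    "sin(t)":  lambda t: math.sin(t),
    "t":       lambda t: t,
}

def sse(f):
    """Sum of squared errors between a candidate and the measurements."""
    return sum((y - f(t)) ** 2 for t, y in data)

best = min(candidates, key=lambda name: sse(candidates[name]))
print(best)  # the search recovers the hidden form: sin(2t)
```

Eureqa itself evolves expressions with genetic programming rather than scanning a fixed list, but the fitness criterion, agreement with noisy data, is the same.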

The researchers now want ABE to operate like the robot scientist Adam, by producing a lab-on-a-chip that Eureqa can control. They are looking to microfluidic devices to achieve that objective.

The findings are published in the journal Physical Biology.

Graphene Big Mac to replace silicon chips soon: Research

Can computer chips made of silicon be replaced by graphene? Scientists from the University of Manchester have demonstrated what graphene chips might look like.

Because graphene is the world's thinnest, strongest and most conductive material, experts believe it could represent a major breakthrough in materials science.

Dr Leonid Ponomarenko, the paper's lead author, opined, “Creating the multilayer structure has allowed us to isolate graphene from negative influence of the environment and control graphene’s electronic properties in a way it was impossible before.”

The team built the graphene ‘Big Mac’ by sandwiching two layers of graphene between layers of another two-dimensional material, boron nitride. This four-layered construct could be the future chip used in computers.

With the two graphene layers entirely encapsulated in boron nitride, the researchers were able to study graphene's behavior free of any influence from ambient factors. Until now, no one had seen graphene behave as an insulator unless it had been damaged in some way; this finding shows for the first time how high-quality graphene can become an insulator.

The boron nitride plays a dual role here: it not only separates the two graphene layers but also makes it possible to gauge graphene's response when it is thoroughly enveloped by another material.

The team is constantly working to improve graphene's attributes. They believe graphene encapsulated within boron nitride to be the best and most advanced platform in graphene electronics, and suggest that encapsulated graphene transistors with enhanced properties could be just months away.

Graphene is a two-dimensional material that can be viewed as a single layer of carbon atoms arranged in a hexagonal lattice. Its properties could lead to flexible touch-screen handsets and computers, lighter aircraft, wallpaper-thin HD televisions and faster internet connectivity.

Graphene was discovered by Professors Andre Geim and Kostya Novoselov of the University of Manchester in 2004, earning them the Nobel Prize in Physics last fall. Chancellor of the Exchequer George Osborne has just announced plans for a £50m graphene research hub. The research is published in the journal Nature Physics.

Can brain scans read our minds?


In a world where recalling dreams seems impossible, could technology help us view our own dreams? Scientists from the University of California have used a high-tech combination of brain imaging and computer simulation to decode and reconstruct people's dynamic visual experiences.

At present, the technology can reconstruct only movie clips that individuals have already watched. Eventually, though, it could be used to reproduce the imagery playing in our minds, our dreams or memories.

“This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds,” commented Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor.

Practically, the finding could help coma patients, stroke victims or those affected by neurodegenerative illnesses who cannot express their thoughts verbally. It may also aid studies of brain-machine interfaces, which could one day let people with cerebral palsy give instructions to computers with their brains.

Two team members served as the research subjects and were asked to view two different sets of Hollywood movie trailers. Using fMRI, the researchers measured blood flow through the visual cortex, which was divided into small, three-dimensional cubes called volumetric pixels, or voxels. A model was constructed for each voxel describing how shape and motion information in a clip maps onto brain activity.

Brain activity evoked by the second set of videos was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of assorted YouTube videos into the computer program so that it could predict the brain activity each film clip would most likely evoke in each participant.

Finally, the computer program selected the 100 clips most similar to the footage the subject had viewed and blended them to create a blurry but continuous reproduction of the actual movie. The tough job of reconstructing movies from brain scans was solved using a two-stage model that separately describes the underlying neural population and the blood-flow signals.
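The final matching step can be sketched abstractly: each candidate clip has a predicted brain-activity pattern, and the program keeps the clips whose predictions best correlate with the observed activity, then blends them. A toy version, with short random vectors standing in for voxel activity (all sizes and values are invented for illustration):

```python
import random

random.seed(0)

N_VOXELS = 50

def correlation(a, b):
    """Pearson correlation between two activity patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# A library of candidate clips, each with a predicted voxel response.
library = [[random.gauss(0, 1) for _ in range(N_VOXELS)]
           for _ in range(1000)]

# Observed activity: generated here from clip 123's prediction plus
# noise, so we know what a good match looks like.
observed = [x + random.gauss(0, 0.3) for x in library[123]]

# Rank candidates by how well their predictions match, keep the top
# 100, and (in the real system) blend those clips into the result.
ranked = sorted(range(len(library)),
                key=lambda i: correlation(library[i], observed),
                reverse=True)
top100 = ranked[:100]
print(top100[0])  # clip 123 should rank first
```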

The findings are reported in the journal Current Biology.

Laser may detect roadside bombs

Terrorism often rears its ugly head through roadside bombs placed to take the lives of unsuspecting passers-by. With this in view, scientists from Michigan State University have produced a laser that may locate roadside bombs, the deadliest enemy weapons found in Iraq and Afghanistan.

The laser has an output similar to that of a simple presentation pointer, yet it can probe large areas and read the responses to detect advanced explosives, which account for nearly 60 percent of coalition soldiers' deaths.

Marcos Dantus, chemistry professor and founder of BioPhotonic Solutions, commented, “The detection of IEDs in the field is extremely important and challenging because the environment introduces a large number of chemical compounds that mask the select few molecules that one is trying to detect. Having molecular structure sensitivity is critical for identifying explosives and avoiding unnecessary evacuation of buildings and closing roads due to false alarms.”

IEDs are typically placed in populated areas, so methods of spotting them ought to be non-destructive. They must also be able to distinguish the explosives from similar compounds found in urban settings. The laser is touted as precise enough to detect amounts as small as a fraction of a billionth of a gram.

The laser beam couples short pulses, which force the molecules to vibrate, with long pulses that are used to 'listen' and identify the resulting chords. These comprise multiple vibrational frequencies that identify each molecule as uniquely as a fingerprint. A high-sensitivity laser could work in tandem with cameras, enabling users to check suspicious locations from a safe distance.

The method was originally developed for microscopes, but by widening its usability it can be made to work for standoff identification of explosives, concludes Dantus. The findings are published in the current issue of Applied Physics Letters.

NCSU researchers devise self-healing stress sensor


While some scientists busy themselves looking for ways to mimic the healing powers of salamanders in humans, researchers from NCSU (North Carolina State University) have designed a self-healing stress sensor. Engineers employ such sensors to assess the strain exerted on materials used to build everything from aircraft to buildings.

Such sensors can measure how an airplane wing is performing in flight and give maintenance authorities advance warning when the wing is detected to be near failure. The problem with this technique is that the sensors tend to break under stress, cutting off the supply of information to users.

“To address this problem, we’ve developed a sensor that automatically repairs itself, in the event that it is broken,” says Dr. Kara Peters, an associate professor of mechanical and aerospace engineering at NC State and co-author of a paper describing the research. “Events that can break a sensor, but don’t break the structure being monitored, are important.”

The NCSU creation is capable of stretching and compressing along with the material it monitors. An infrared light wave runs through the sensor, and changes in that wave are detected in order to report how much strain the material in question is experiencing. The sensor contains a pair of glass optical fibers which run through a reservoir filled with UV-curable resin.

“These events could be bird strikes to an airplane wing or earthquake damage to a building. Collecting data on what has happened to these structures can help us make informed decisions about what is safe and what is not. But if those sensors are broken, that data isn’t available. Hopefully, this new sensor design will help us collect this sort of data in the future,” adds Peters.

The ends of the glass fibers are aligned with each other, though separated by a small gap. Focused beams of UV and IR light pass through one of the fibers. When the UV beam reaches the resin, the resin hardens into a thin polymer filament which joins the glass fibers and creates a closed circuit for the IR light. The remaining resin in the reservoir stays liquid, enveloping the filament.

The paper, ‘A self-repairing polymer waveguide sensor’, can be found in this month’s issue of Smart Materials and Structures.

Laser Made From Nanoparticles Reportedly Helps Form 3-D Crystals

Here is a novel means of creating 3-D arrays of optically induced crystals. University of Michigan physicists claim that electric fields generated by intersecting laser beams can trap and manipulate thousands of microscopic plastic spheres, producing 3-D arrays of optically induced crystals. The technique could be employed to understand the structure of materials of biological interest, including bacteria, viruses and proteins.

The standard method used to characterize biological molecules like proteins involves crystallizing them. The resulting structure is then analyzed by bombarding the crystals with X-rays, a process known as X-ray crystallography. However, this method does not work for some proteins, including cell-membrane proteins, which cannot be crystallized at all.

“So we came up with this idea that one could use, instead of a conventional crystal, an optically induced crystal in order to get the crystallization of a sample that could be suitable for structural analysis,” said U-M physicist Georg Raithel, professor of physics and associate chair of the department.

In the research, the experts demonstrated the laser technique using microscopically small plastic spheres instead of molecules; optically induced crystals have been formed by other researchers before, but not as 3-D arrays. In the process, laser beams pass through two opposed microscope lenses, one directly beneath the other. Two infrared laser beams travel through each lens and meet at a common focal point on a microscope slide that holds thousands of plastic nanoparticles suspended in a drop of water.

The electric fields developed by the intersecting laser beams vary in strength in a regular pattern, forming a 3-D grid called an optical lattice. The nanoparticles are pulled into regions of high electric-field strength, and thousands of them align to form optically induced crystals. The crystals are roughly spherical, about 5 microns in diameter.
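The regular pattern of field strength can be sketched numerically in one dimension: two equal counter-propagating beams interfere to give an intensity proportional to cos²(kx), and particles are drawn toward the intensity maxima, which sit half a wavelength apart. A 1-D simplification of the full 3-D lattice (the wavelength value is an assumption):

```python
import math

WAVELENGTH = 1.064e-6         # infrared laser, metres (assumed value)
k = 2 * math.pi / WAVELENGTH  # wavenumber

def intensity(x):
    """Standing-wave intensity of two equal counter-propagating beams,
    in units of the single-beam intensity."""
    return 4 * math.cos(k * x) ** 2

# Intensity maxima (trap sites for high-field-seeking particles)
# occur where cos(kx) = +-1, i.e. every half wavelength:
site_spacing = WAVELENGTH / 2
print(site_spacing)  # 5.32e-07 m between lattice sites

# Check: intensity at a site vs midway between sites.
print(intensity(0.0))                          # 4.0 (maximum)
print(round(intensity(site_spacing / 2), 10))  # 0.0 (node)
```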

The research was published online on May 31 in the journal Physical Review E.

Novel Method Of Creating Graphene Supposedly Unlocked

First isolated in 2004, graphene is a one-atom-thick layer of carbon that could help develop high-speed transistors and integrated circuits that use less energy than silicon electronics. To further this goal, University of Houston experts have now developed a method for creating single-crystal arrays of graphene, an advance that may help scientists come up with a replacement for silicon in high-performance computers and electronics.

The method starts from pre-patterned seeds, from which an ordered array of thousands or millions of single crystals of graphene can be produced. The investigators developed the single-crystal growth process at the UH Center for Advanced Materials (CAM), and the ordered arrays could serve as a medium for creating electronic devices.

“There is still a long way to go. However, this development makes the fabrication of integrated circuits with graphene transistors possible. This may actually be the first viable integrated circuit technology based on nano-electronics,” said Steven Pei, UH professor of electrical and computer engineering.

The seeded-growth technique was employed to grow single-crystal graphene arrays on top of a copper foil placed inside a methane gas chamber, in a process called chemical vapor deposition. The process is believed to be effective for producing large-area graphene films for touch-screen displays, e-books and solar cells. The researchers mention that they were able to control the growth of the ordered arrays throughout the investigation.

The research is published in the June issue of Nature Materials.

Nantenna Apparently Captures Up To 95 Percent Of Light Energy

Today's solar panels are relatively inefficient, capturing only about 20 percent of available light. A University of Missouri engineer has now designed a flexible solar sheet that can collect more than 90 percent of available light, and he plans to develop prototypes for consumer use within the next five years.

Energy generated via traditional photovoltaic (PV) methods of solar collection is inefficient and neglects much of the available solar electromagnetic (sunlight) spectrum. The newly introduced device, dubbed a nantenna, is a thin, moldable sheet of tiny antennas that can harvest the heat from industrial processes and convert it into usable electricity. Attempts are being made to turn this concept into a direct sun-facing nantenna device for gathering solar radiation in the near-infrared and optical regions of the solar spectrum.

“Our overall goal is to collect and utilize as much solar energy as is theoretically possible and bring it to the commercial market in an inexpensive package that is accessible to everyone,” Patrick Pinhero, an associate professor in the MU Chemical Engineering Department, stated. “If successful, this product will put us orders of magnitudes ahead of the current solar energy technologies we have available to us today.”

The scientists have devised a way to extract electricity from the collected heat and sunlight using special high-speed electrical circuitry. The energy-harvesting device has been designed with existing industrial infrastructure in mind, including heat-process factories and solar farms. Within five years, the world could have a product that complements conventional PV solar panels, and since the nantenna device is a flexible film, it could be incorporated into roof shingles or custom-made to power vehicles.

The research was published in the Journal of Solar Energy Engineering.

Novel Sensor Reportedly Identifies Tiny Traces Of Explosives

Ion mobility spectrometers are an affordable and reliable means of detecting explosives, yet the next generation of nanosensors holds the potential to detect even single molecules of explosives at room temperature and atmospheric pressure. With a highly innovative approach, MIT researchers have now crafted a unique sensor that picks up a single molecule of an explosive such as TNT.

To create the sensors, carbon nanotubes were coated with protein fragments called bombolitins, normally found in bee venom. The nanotubes are hollow, one-atom-thick cylinders of pure carbon. The proteins used in this research react to explosives, specifically a class known as nitro-aromatic compounds that includes TNT. Once developed into commercial devices, these sensors may be more sensitive than present-day explosives detectors.

“Compounds such as TNT decompose in the environment, creating other molecule types, and those derivatives could also be identified with this type of sensor,” remarked Michael Strano. “Because molecules in the environment are constantly changing into other chemicals, we need sensor platforms that can detect the entire network and classes of chemicals, instead of just one type.”

Each nanotube-peptide combination reacts differently to different nitro-aromatic compounds. By using several nanotubes coated in different bombolitins, investigators can identify a distinct ‘fingerprint’ for each explosive they might want to detect. The nanotubes can purportedly sense the breakdown products of such explosives as well.
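The fingerprinting idea reduces to pattern matching: an array of differently coated nanotubes yields a response vector for each compound, and an unknown sample is identified by the closest known fingerprint. A toy nearest-neighbor sketch (compound names and response numbers are invented for illustration):

```python
# Each known compound has a "fingerprint": the responses of an array
# of four differently coated nanotubes. All values are invented.
fingerprints = {
    "TNT":         [0.9, 0.1, 0.4, 0.7],
    "pesticide A": [0.2, 0.8, 0.3, 0.1],
    "pesticide B": [0.1, 0.2, 0.9, 0.5],
}

def identify(reading):
    """Match a sensor reading to the nearest known fingerprint
    by squared Euclidean distance."""
    def dist(fp):
        return sum((a - b) ** 2 for a, b in zip(fp, reading))
    return min(fingerprints, key=lambda name: dist(fingerprints[name]))

# A noisy reading from an unknown sample:
sample = [0.85, 0.15, 0.45, 0.65]
print(identify(sample))  # TNT
```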

“It doesn’t mean that we are ready to put these onto a subway and detect explosives immediately. But it does mean that now the sensor itself is no longer the bottleneck,” Strano explained. “If there’s one molecule in a sample, and if you can get it to the sensor, you can now detect and quantify it.”

The nanotubes were also able to identify two pesticides that are nitro-aromatic compounds, making them potentially useful as environmental sensors. The technology has already drawn commercial and military interest, mainly because such sensors are widely used today.

The research is published online in the Proceedings of the National Academy of Sciences.