Remember Thought Factory by Axis? Yes, it was their innovation lab for startups limited to the FinTech domain. For others, they now have Axis Start-up Social.


As the name suggests, the new platform is for networking and socializing, thus sharing knowledge and helping other potential startups. It went live on December 15, at an event held at WeWork, a co-working space in Bengaluru.

We at Today's Trendy were invited to attend the launch event, which unfolded pleasantly that Friday evening. The founders of Licious and Divrt were there to share their success stories, along with the struggles they faced along the way.

First on the podium was Vivek Gupta, the CEO of Licious. It is a meat delivery service currently operating in Bengaluru, Hyderabad and Delhi NCR. "We are not a food-delivery service," he kept insisting. He shared his story about how he ended up branding meat, and admitted to knowing how the funding side of the startup ecosystem works, thanks to his previous stint at a VC firm.


Next up was Amit Rohatgi of Divrt. The startup aims to provide a comfortable, data-driven parking solution in the world's most populated cities. He talked about how the startup ecosystem in Silicon Valley differs from the one in India. He is appalled by the enormous sums of money startups in the subcontinent offer tech talent, and is unsure why developers in India settle for a high CTC figure instead of working for shares, which can grow manifold over the years.
Informative Q&A sessions followed each of the above talks. Vivek explained the importance of collecting data about customers, sales, and almost every other department. The next logical step is to draw conclusions from that data and plan the next move strategically.

If I may say, the idea will be a huge success. One can learn a lot from one's own past mistakes, but equally from the mistakes of others. It is always good to know what not to do. These talks can be a real game changer for newbies in the field.

To lighten the mood, a stand-up comedy act was arranged by Vikram Poddar, an investment banker turned corporate stand-up comedian. The evening ended with networking opportunities over snacks.

"We are excited to launch Axis Start-up Social, a platform through which we hope to provide the much required, ‘extra edge’ to the start-up community by handholding them, sharing knowledge and providing the required financial solutions. Today, the Indian ecosystem is flooded with innovative ideas but what is missing is the presence of the right channel and guidance in terms of acceleration, scaling up and funding. Through Axis Start-up Social, we endeavour to create an ecosystem to encourage innovation and the next-level-of-growth opportunities to start-ups that are ready to take that leap,"
said Sidharth Rath from Axis Bank.

This is a sponsored post.
Nvidia Titan V, the latest offering from the American technology company, is the world's most powerful GPU ever created for Personal Computers. The product targets researchers and scientists. It was announced at the annual NIPS conference by founder and CEO, Jensen Huang.

Nvidia Titan V

The GPU packs 21.1 billion transistors. Its new Tensor Cores deliver 110 teraflops of performance and are designed for deep learning and scientific computation and simulation.

The Nvidia Titan V promises to turn a PC into an Artificial Intelligence powered supercomputer. The new GPU has nine times the raw horsepower of its predecessor. It is built on their proprietary Volta GPU architecture.

The new GPU is twice as energy efficient as the previous-generation Pascal design, thanks to a significant redesign of the multiprocessor at the center of the Graphics Processing Unit. It also delivers a performance boost within the same power envelope as its predecessor.

Nvidia Titan V is priced at $2,999.

Watch the introductory video below:

Scientists at the University of California San Diego have created a smartphone case to measure the level of glucose in the blood. It is being developed by a team led by professors Patrick Mercier and Joseph Wang.

Reusable sensor at the corner of GPhone

How does it look?


The 3D-printed case is named the GPhone. It draws its power from the phone's battery. There is a reusable sensor at one of the corners. This sensor is connected to a printed circuit board. The removable stylus on the side is filled with pellets. One pellet is dispensed at a time, and each pellet is for one-time use only. The product is still in the prototype phase.

How does it work?


One has to fetch a pellet from the stylus and place it on the sensor. A magnet holds it in place, and the sensor is powered up.

The next step involves placing a sample of blood on the pellet. Glucose oxidase, an enzyme present in the pellet, reacts with the glucose in the blood and generates an electric signal. The signal's strength is directly proportional to the glucose concentration in the blood. The electrodes of the sensor measure the signal and transmit the data to the smartphone via Bluetooth. The data is then presented to the user as a number on an application built for the purpose.
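Since the signal's strength is directly proportional to the glucose concentration, the app's job boils down to a linear calibration from measured signal to a glucose number. Here is a minimal sketch of that idea; the slope and intercept values are invented for illustration and are not the GPhone's actual calibration.

```python
# Sketch of the proportional relationship described above: the sensor's
# electrical signal maps to a glucose concentration via a linear calibration.
# The slope/intercept values here are made up for illustration only.

def glucose_from_signal(current_ua, slope_ua_per_mgdl=0.05, intercept_ua=0.1):
    """Convert a measured current (microamps) to glucose (mg/dL)."""
    return (current_ua - intercept_ua) / slope_ua_per_mgdl

# A stronger signal means more glucose in the sample.
low = glucose_from_signal(4.1)    # roughly 80 mg/dL
high = glucose_from_signal(9.1)   # roughly 180 mg/dL
print(low, high)
```

In a real device the calibration constants would come from testing the sensor against reference samples.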

The sensor on the GPhone is connected to a PCB

Data collected over time can be used to track progress, or interpreted in other meaningful ways.
Each test run takes around 20 seconds. Once done, the pellet is thrown away, which deactivates the sensor.

Challenges at hand


1. The pellets cost more than the paper strips used in conventional test kits.
2. Each test run needs at least 12 drops of blood! Research is underway to reduce this to the bare minimum.

Source: University of California San Diego
The HDMI Forum's Technical Working Group has released the specifications of HDMI 2.1. As expected, some of the features will work over existing HDMI 2.0 cables, while the rest will need the upgraded Ultra High Speed HDMI cable.

Display resolution comparison along with the new Ultra High Speed HDMI cable

HDMI 2.1 supports super-high-bandwidth content such as 4K resolution at a 120 Hz refresh rate (double its predecessor's) and 8K at 60 Hz. Resolutions up to 10K at 120 Hz are supported, but only after processing by a stream-compression algorithm. The maximum transfer rate is increased from 18 Gbps to 48 Gbps.
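The bandwidth figures can be sanity-checked with back-of-the-envelope arithmetic: raw pixel rate is roughly width × height × bits per pixel × refresh rate. Real HDMI links add blanking intervals and encoding overhead, so these numbers understate the true requirement; this is an illustration, not the spec's own accounting.

```python
# Rough pixel-rate arithmetic for the resolutions above, assuming 24-bit
# color and ignoring blanking/encoding overhead (illustration only).

def raw_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * bits_per_pixel * refresh_hz / 1e9

print(raw_gbps(3840, 2160, 120))   # 4K @ 120 Hz -> ~23.9 Gbps, fits in 48 Gbps
print(raw_gbps(7680, 4320, 60))    # 8K @ 60 Hz  -> ~47.8 Gbps, near the limit
print(raw_gbps(10240, 4320, 120))  # 10K @ 120 Hz -> ~127 Gbps, needs compression
```

This makes it clear why 10K content can only be delivered after compression.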

The new standard also supports Dynamic HDR at specific resolutions and bandwidths. For the uninitiated, Dynamic HDR is a fancy term for dynamic metadata that allows changes on a frame-by-frame basis. In simple words, the technology can display a broader range of colors and can adjust individual frames for optimal brightness, contrast, depth and other details.

The 1 Mbps bi-directional data channel used for audio is replaced with a 37 Mbps one, increasing its capacity manifold. As a result, uncompressed 5.1 and 7.1 audio and high-bitrate streams like Dolby Atmos and TrueHD are now supported.

Variable Refresh Rate (VRR)

Variable Refresh Rate (VRR) is aimed at gamers: it constantly updates the screen's refresh rate to synchronize with the video output, resulting in improved smoothness and a reduction in lag, frame drops, and tearing. It is complemented by Quick Frame Transport (QFT), which reduces the latency between the source and the screen.

Quick Media Switching (QMS) removes the blank screen wait period when switching across HDMI devices.

Bandwidth comparison across various HDMI versions

The new Ultra High Speed HDMI cable will be required for the high-bandwidth features. It is compatible with Type A, C and D connectors, supports the HDMI Ethernet Channel, and is backward compatible with existing devices. It is expected to be available for purchase early next year.

Currently, virtual reality experiences are limited to the person wearing the headset or holding the controller, isolating them from the people around. Disney's Magic Bench is a mixed reality prototype designed to make augmented reality a group activity by eliminating such hardware altogether.

Disney Magic Bench

One can just walk over to the bench and take a seat. The system senses the number of people and triggers various scenarios accordingly.

The system allows people to interact with virtual characters in a 3D space. A video screen in front presents the scene, which is produced by algorithmically combining feeds captured by an RGB camera and a depth sensor.
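Combining an RGB feed with a depth map is what lets real people occlude virtual characters correctly. A toy sketch of that per-pixel idea (the arrays and depth values are stand-ins, not Disney's actual pipeline):

```python
import numpy as np

# Minimal depth-aware compositing: a virtual character's pixel is drawn only
# where it is nearer to the camera than the real scene, so real objects can
# occlude it. All values below are toy stand-ins.

def composite(rgb, scene_depth, char_rgb, char_depth):
    """Overlay the character wherever it is closer than the real scene."""
    mask = char_depth < scene_depth          # True where character is in front
    out = rgb.copy()
    out[mask] = char_rgb[mask]
    return out

rgb = np.zeros((2, 2, 3), dtype=np.uint8)           # real camera feed (black)
scene_depth = np.array([[1.0, 1.0], [3.0, 3.0]])    # metres from camera
char_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)  # character feed (white)
char_depth = np.full((2, 2), 2.0)                   # character 2 m away

result = composite(rgb, scene_depth, char_rgb, char_depth)
# Bottom row (scene at 3 m) shows the character; top row (1 m) occludes it.
```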

Haptic feedback from the actuators present beneath the Magic Bench makes the animated characters feel real. A thump when an elephant sits down, or a vibration when it snores, makes it all seem as if it is physically present.

Even though the product is in its research phase, the future of AR looks promising. Disney is showing the Magic Bench at SIGGRAPH in Los Angeles next week.

Watch the immersive experience in the video below:



Source: EurekAlert
Most of us know Toyota as a car manufacturer. What we may not know is that they are also into robotics and automation. In 2005, the Japanese multinational announced a project called "Partner Robot." It marked the beginning of a shift from industrial robots (which they had been making since the 1970s) to household machines. Recently, they reached a significant milestone in the project with a successful in-home trial of the HSR (Human Support Robot) at a war veteran's home in North America. The bot, first introduced in 2012, helps people with disabilities carry out their day-to-day activities independently.

Toyota Human Support Robot

A tablet interface is provided to interact with HSR. It can be commanded to open doors and fetch water bottles.

The in-home trial was conducted with Romulo "Romy" Camargo, a decorated US war veteran who served in Afghanistan and is paralyzed from the neck down.

"When they opened the box, and I saw the robot, I figured we would unfold the next chapter in human support robots helping people with disabilities – like this research is going to change the world," says Romy.

HSR can be commanded to open doors and fetch water bottles

Watch the video below to see HSR in action:

Automation is the future. Robots are a means to achieve it. Most robots are made of metal; however, some applications need something softer. That is where soft robots come into the picture. For the uninitiated, this sub-field deals with 'constructing machines from highly compliant materials, similar to those found in living organisms.' Harvard has taken this a step further and developed robots from drinking straws. Needless to say, they are inspired by insects.

Soft robots from Harvard are made from drinking straws

The semi-soft robot developed by the team is capable of standing and walking. There is also a robotic water strider that can move along the surface of a liquid.

“If you look around the world, there are a lot of things, like spiders and insects, that are very agile,” said George Whitesides, the lead researcher. “They can move rapidly, climb on various items, and are able to do things that large, hard robots can’t do because of their weight and form factor. They are among the most versatile organisms on the planet. The question was, how can we build something like that?”

They began by making the plastic straws bendable by cutting a notch in them. Short tubes were then inserted into those notches, and rubber strips were attached on either side to act as tendons. Inflating the tubes extended the joints, while deflating them let the rubber retract them, creating a moving mechanism.

The above resulted in a simple one-legged robot capable of crawling. The team then increased the complexity by adding more legs. Two legs let the bot push and pull itself, and a third resulted in a robot capable of standing on its own, like a tripod. At six legs, the team was able to achieve a gait similar to that of an ant. By eight legs, making the bots walk became a challenge from the programming perspective, so an Arduino microcontroller was deployed for the purpose.

“A spider has the ability to modulate the speed at which it extends and contracts its joints to carefully time which limbs are moving forward or backward at any moment,” said Alex Nemiroski, the co-author. “But in our case, the joints’ motion is binary due to the simplicity of our valving system. Either you switch the valve to the pressure source to inflate the balloon in the joint, and thus extend the limb, or you switch the valve to atmosphere to deflate the joint and thus retract the limb. So in the case of the eight-legged robot, we had to develop our own gait compatible with the binary motion of our joints. I’m sure it’s not a brand-new gait, but we could not duplicate precisely how a spider moves for this robot.”
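Because each joint is binary (inflate or vent), a gait reduces to a timed sequence of on/off valve states. Here is a hypothetical sketch of an alternating-tripod sequence for six legs; it illustrates the idea of binary-joint gaits and is not the team's actual controller code.

```python
# Each joint is binary: a valve either pressurizes it (limb extends, 1) or
# vents it (limb retracts, 0). A gait is then just a timed sequence of valve
# states. This alternating-tripod sequence for six legs is hypothetical.

TRIPOD_A = [0, 2, 4]  # legs moved together in phase 1
TRIPOD_B = [1, 3, 5]  # legs moved together in phase 2

def tripod_gait_steps(cycles):
    """Yield valve states (1 = inflated/extended) for six legs."""
    for _ in range(cycles):
        for active in (TRIPOD_A, TRIPOD_B):
            yield [1 if leg in active else 0 for leg in range(6)]

for step in tripod_gait_steps(1):
    print(step)  # [1, 0, 1, 0, 1, 0] then [0, 1, 0, 1, 0, 1]
```

On real hardware, each state would be held long enough for the balloons to inflate or deflate before moving to the next phase.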

Even though all this sounds like a DIY arts-and-crafts project, the prototype works as a proof of concept, so future research can focus on developing such robots from lightweight structural polymers. That might open up possibilities of using them in search-and-rescue operations during natural disasters and in conflict zones.

Watch the robots in action below:

Nvidia has unveiled their intelligent video analytics platform, the Nvidia Metropolis. It brings them a step closer to the envisioned Artificial Intelligence enabled smart cities of the future. The platform is a combination of various Nvidia products operating on a unified architecture.

Nvidia Metropolis is the next step towards smart cities of the future

The idea is to apply deep learning techniques to the real-time video streams generated by the cameras, and then use that information in the areas of public safety, resource optimization, and traffic management, to name a few.

Currently, most raw video is simply stored on disks rather than processed instantly, as doing so involves massive manpower expenditure. Quick analysis methods will be able to process all that data at scale and with higher accuracy.

Nvidia Metropolis is a combination of various Nvidia products operating on a unified architecture

Nvidia expects there to be around a billion cameras in the public domain (commercial buildings, government property, public transit, and roadways) by 2020, giving its platform enough raw data to process and improve upon.

Nvidia has partnered with various companies (Avigilon, Dahua, Hanwha Techwin, Hikvision, Milestone, and more) to build products and applications around the Metropolis platform. A few such ideas will be on display at the GPU Technology Conference happening this week.
We are well aware of what heat does to an electronic device, and how different devices use various cooling mechanisms (fans, water, and so forth) to keep from overheating or shutting down altogether. However, can the same dreaded heat be used as an alternative source of energy to power those devices? Engineers at the University of Nebraska-Lincoln seem to think so!

A thermal diode

They have developed a thermal diode to achieve just that. As the name suggests, it operates on heat (as opposed to traditional diodes, which operate on electricity). For the uninitiated, a diode is a logic component in electronic circuits that allows current to flow freely in one direction while blocking it in the other.
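The one-way behaviour can be pictured with a toy model: heat conducts when the forward terminal is the hotter one, and is blocked otherwise. This is an idealized illustration of the diode concept, not the device's actual physics, and the conductance value is arbitrary.

```python
# Idealized model of a thermal diode: heat flows in the forward direction
# but is blocked in reverse. Toy illustration; the conductance is arbitrary.

def heat_flow(t_forward, t_reverse, conductance=0.8):
    """Heat flows only when the forward terminal is hotter (forward bias)."""
    delta = t_forward - t_reverse
    return conductance * delta if delta > 0 else 0.0

print(heat_flow(400.0, 300.0))  # forward bias: heat conducted (~80 units)
print(heat_flow(300.0, 400.0))  # reverse bias: 0.0, heat blocked
```

Chaining such one-way elements is what would allow heat, like electric current, to implement logic.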

“If you think about it, whatever you do with electricity you should (also) be able to do with heat because they are similar in many ways,” said Sidy Ndao, co-author of the study. “In principle, they are both energy carriers. If you could control heat, you could use it to do computing and avoid the problem of overheating.”

In their published paper, the team claims the diodes work as expected at temperatures up to 630 degrees Fahrenheit (approximately 332 degrees Celsius). They expect the design to work at temperatures as extreme as 1,300 degrees F (704 degrees C), which could allow it to be deployed in computers that operate in extreme heat.

“We are basically creating a thermal computer,” Ndao said. “It could be used in space exploration, for exploring the core of the earth, for oil drilling, (for) many applications. It could allow us to do calculations and process data in real time in places where we haven’t been able to do so before.”

Engineers at the University of Nebraska-Lincoln at work

If all goes well, this will result in improved energy efficiency as the heat energy that gets lost now can then be reused to power the device.

“It is said now that nearly 60 percent of the energy produced for consumption in the United States is wasted in heat,” Ndao said. “If you could harness this heat and use it for energy in these devices, you could obviously cut down on waste and the cost of energy.”

“If we can achieve high efficiency, show that we can do computations and run a logic system experimentally, then we can have a proof-of-concept,” said Mahmoud Elzouka, co-author of the study. “(That) is when we can think about the future.”

Since diodes are not the only component in an electronic device, this can be considered the first step towards a thermal computer. Scientists still need to find ways for the other elements to operate at such high temperatures.
Smartphone manufacturers these days boast of making the thinnest devices to stand out from the crowd and offer something unique in an over-saturated market. The thickness of the camera lens plays a significant role in achieving this. What if there were a camera module just a couple of millimeters thick? Our phones could then be thinner than we can now imagine! Scientists at the Fraunhofer Institute for Applied Optics and Precision Engineering in Germany have achieved just that. The technology was showcased at the Consumer Electronics Show held in Las Vegas earlier this month.

Facet Vision camera mounted on the ribbon cable

Aptly termed Facet Vision, the technology is inspired by insects and their multi-faceted eyes. Instead of using a single lens, the camera is composed of 135 uniform lenses placed close together in a mosaic. Each one captures a part of the subject, and the parts are later combined to yield the complete picture.
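The mosaic idea can be sketched in a few lines: each lenslet images one tile of the scene, and the tiles are reassembled into the full picture. The 9×15 grid below gives 135 tiles to match the lens count, though the real camera's layout, overlap handling, and distortion correction are far more involved; this is a toy illustration only.

```python
import numpy as np

# Toy sketch of mosaic assembly: each lenslet captures one tile, and the
# tiles are stitched back into the full image. A real facet camera also
# corrects overlap and distortion between lenslets.

def assemble(tiles):
    """tiles[r][c] is the small image from one lenslet; stack into one image."""
    rows = [np.hstack(row) for row in tiles]
    return np.vstack(rows)

scene = np.arange(9 * 4 * 15 * 4).reshape(36, 60)   # a toy 36x60 "scene"
tiles = [[scene[r*4:(r+1)*4, c*4:(c+1)*4] for c in range(15)]
         for r in range(9)]                          # 9 x 15 = 135 tiles
restored = assemble(tiles)
assert np.array_equal(restored, scene)               # mosaic rebuilds the scene
```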

With the currently available resolution, it is expected to be used in robotics, the automobile industry, medical engineering, and the printing industry, among others.

The camera is just 2 mm thick - a huge difference from the usual 5 mm lenses used in smartphones. So far, the scientists have achieved resolutions of up to 4 MP with this technology. Further research suggests the number can go up to 10 megapixels, which would make it a suitable choice for smartphones.
Lithium-ion batteries power most consumer electronics. Their ability to retain charge for long durations has made them the industry-wide standard. In recent years, lithium-oxygen batteries have surfaced as a possible successor; however, several limitations have kept them out of commercial products so far.

Heme molecule could be the reason for better rechargeable batteries

One such obstacle is the formation of lithium peroxide. For the unaware, it is a solid precipitate formed during the chemical process that covers the surface of the electrodes, slowing (and ultimately halting) the flow of ions. The idea is to find a catalyst that decomposes those harmful peroxide molecules into lithium ions and oxygen gas.

Researchers at a Yale laboratory have identified a molecule, heme, that can act as the catalyst in an environmentally friendly manner. It is the same molecule that makes up one of the two parts of hemoglobin, the protein responsible for transporting oxygen in the blood. Since heme binds well with oxygen, it is a perfect candidate for the job.

The team successfully demonstrated that, when used as the catalyst, the heme molecule improved the lithium-oxygen cell's function by reducing the energy needed to charge and discharge the battery.

Moreover, this could reduce animal-waste disposal, since the biomolecule is traditionally just a waste product of the animal-products industry.
Most of us use memory cards with our smartphones and digital cameras to make sure we have that extra storage when it is needed the most. At Photokina 2014, SanDisk introduced a 512 GB memory card. Taking this a step further, they have now revealed a prototype of the 1 TB SDXC card at the same event this year held in Cologne, Germany.

SanDisk 1 TB SDXC card prototype

Going down memory lane, the first SanDisk 64 MB SD card was introduced sixteen years ago.

Increasing demand for 4K and 8K video content, along with applications such as virtual reality, 360-degree video, and 24x7 video surveillance, means devices expect ever more storage. This 1 TB card should fit the purpose.

“Just a few short years ago the idea of a 1 TB capacity point in an SD card seemed so futuristic – it is amazing that we are now at the point where it is becoming a reality. With the growing demand for applications like VR, we can certainly use 1 TB when we are out shooting continuous high-quality video. High-capacity cards allow us to capture more without interruption, streamlining our workflow, and eliminating the worry that we may miss a moment because we have to stop to swap out cards,”

said Sam Nicholson, CEO of Stargate Studios and member of the American Society of Cinematographers.

SanDisk 1 TB SDXC card prototype

Since this is just a prototype, the price and availability of the product are not known. However, the price can be speculated from the fact that the 512 GB card initially retailed at $799, a figure that later fell to $599.
The startup scene is booming across the world, and India is no exception. Every other day, numerous startups spawn from the garages and bedrooms of thinkers. Some fail to take off, while others gather massive funding and try to make a dent in the ecosystem. Seeing all this, Axis Bank has come up with an innovative idea to nurture and house a few such startups. They are calling it the Axis Thought Factory.

Axis Bank Thought Factory

Needless to say, all or most of the selected ideas pertain to the finance domain. Moreover, since a startup these days typically tries to solve a problem using computing and available technology, it all ends up as an amalgamation of finance and technology addressing real-world problems; hence the term FinTech. So, if you have an idea and are thinking of enrolling in the program (details on how to do that later in the article), you have a better shot if it relates to finance. If not, you can still give it a try, as there is no such limitation on paper; it is just that the odds are better in the former case.

Before I go into the details, it is important to note that this is yet another incubator, but one specializing in the finance domain. The specialization comes from the fact that people working at Axis Bank (read: with years of experience in the area) will mentor the would-be entrepreneurs on anything financial.

How does it work?


The entire concept is divided into three steps. In the in-house incubation step, various ideas and technologies are discussed. In the accelerator step, workspace, mentorship, and tools are provided to refine and realize those ideas. In the final step, social engagements are arranged, wherein products and further concepts are presented to potential investors (Axis Bank being one of them).


Why should a startup consider enrolling?


There are various convincing reasons:
  1. A chance to display your idea to the world
  2. It is always better to have another helping mind
  3. Support and mentorship from industry experts
  4. Help in getting funded
  5. Invaluable experience and a lot to learn

How can one enroll?


One of the following will work:
  1. Participate in roadshows conducted in various cities across the country
  2. Look out for hackathons arranged under the "Hack for Hire" label
  3. Just drop the team an email and hope you get lucky
For details like the email address and exact locations, refer to the official website.

Good luck.
Yes, you read that right. Thanks to LG, this may soon become a reality. LG Innotek, a subsidiary of LG, has successfully developed a design in which the fingerprint sensor is placed below the display.

LG has put the fingerprint sensor under the display

The way fingerprint modules are placed in current smartphones, they are prone to scratches, as users must make physical contact with them to authenticate. The new design easily solves this problem: the display already has a protective covering (such as Gorilla or Dragontrail glass), and the fingerprint sensor can sit under it unharmed. Also, OEMs need not worry about the fingerprint module when making IP67-certified (waterproof) devices.

To house the sensor, a 0.3 mm deep cut is made on the lower backside of the glass. The sensor is then attached to the display with an adhesive.

In terms of accuracy, the design is comparable to the sensors available today: the probability that an unauthorized person gains access to the device (via the fingerprint sensor, of course) is as low as 0.002%.
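That 0.002% figure is a per-attempt false-accept rate, so the chance of an impostor getting in within N tries is 1 − (1 − p)^N. A quick back-of-the-envelope check (our arithmetic, not LG's published methodology) shows it stays small even over many attempts:

```python
# Per-attempt false-accept rate quoted above, as a probability.
p = 0.00002  # 0.002%

def false_accept_within(n_attempts):
    """Probability of at least one false accept within n attempts."""
    return 1 - (1 - p) ** n_attempts

print(false_accept_within(1))    # ~2e-05 for a single attempt
print(false_accept_within(100))  # ~0.002, i.e. about 0.2% over 100 attempts
```

In practice, lockouts after a few failed attempts keep the effective risk far lower still.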

Effectively, all this means we will be able to unlock a device by just placing a finger on the display. Even though this sounds interesting, it is just the beginning, and the future looks promising. We'll have to wait and see how events unfold.

As far as commercial availability is concerned, there is no news yet. But given the increasing demand for fingerprint sensors in portable gadgets and the rise of digital payments, that day is not far off.

Stay tuned for follow-up stories regarding the same.
Oppo today presented exciting advances in smartphone user experience with its SmartSensor image stabilization and Super VOOC Flash Charge technologies at Mobile World Congress 2016.

Oppo Super VOOC Flash Charge can charge your phone in just 15 minutes

Super VOOC Flash Charge is an improvement on the original VOOC Flash Charge technology, which gave users 2 hours of talk time after just 5 minutes of charging. Super VOOC takes it a step further, offering a full 10 hours of talk time from the same 5-minute charge and filling a 2,500 mAh battery in only 15 minutes!

Super VOOC Flash Charge uses a 5 V low-voltage pulse-charge algorithm that dynamically regulates the current to charge the phone in the shortest time currently possible (we expect more such innovations in the future). The all-new algorithm pairs with a customized super battery, as well as a new adapter, cable and connector made from premium, military-grade materials. Super VOOC Flash Charge supports both micro-USB and Type-C interfaces.
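The headline claim implies some hefty numbers: filling 2,500 mAh in 15 minutes means a 10 A average current, or about 50 W at the 5 V bus. This is idealized arithmetic that ignores losses and current tapering, so it illustrates the scale rather than Oppo's actual charging profile.

```python
# Back-of-the-envelope arithmetic behind the 15-minute claim above.
# Losses and tapering are ignored (idealized illustration only).

capacity_ah = 2.5      # 2,500 mAh battery
charge_time_h = 0.25   # 15 minutes
voltage_v = 5.0        # the low-voltage bus mentioned above

avg_current_a = capacity_ah / charge_time_h
avg_power_w = voltage_v * avg_current_a

print(avg_current_a)   # 10.0 A average
print(avg_power_w)     # 50.0 W average
```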
One thing that can annoy anyone is a bulky smartphone or camera. Although our gadgets have lost considerable weight over the years, there is still room for improvement. What if a paper-thin camera lens could replace today's bulky lenses, making devices slimmer? Thanks to engineers at the University of Utah, this might soon be a reality!

Prototype of the paper-thin lens that might soon be seen on your smartphone or camera

The engineers have developed a method of creating flat, thin optical lenses that can bend light to a single point in space. This is what traditional camera lenses do, but with all their bulk and curves. Not anymore!

As you are well aware, light is composed of several colors. If you are familiar with refraction (remember the bent-pencil-in-water school experiment?), you'll know that all those components must pass through the lens and converge on a single point on the camera sensor to capture a photograph. But since each color has different properties and hence bends differently, multiple lenses of various curvatures were required to converge them all to a single point. Not anymore!

Using the principle of diffraction, the engineers have developed a super-achromatic lens ten times thinner than the width of a human hair! Light (and its colored components) interacts with the microstructures in the lens and bends, finally converging at a single point. Transparent materials like glass or plastic are the main candidates from which such a lens can be made.

This thin lens is still under prototyping, and we expect to see its commercial applications in the coming years.
Intel's dominance in the world of processors is well known. At least once in your life, you have surely used a computer powered by Intel. But what about smartphones? Well, their recently launched chipsets can power your smartphone as well. Read on to know more.


Intel will now be in your smartphone as well

The manufacturer has launched a range of 6th-generation processors specially suited to mobile devices. The all-new Y-series, U-series, and H-series processors deliver a new class of computing with a host of new features to power the next generation of two-in-ones, notebooks, and other small-form-factor or mobile devices.

The family is comprised of processors that deliver a leap in performance and power efficiency, provide stunning visuals, and enable amazing user experiences when paired with Windows 10, the latest OS in town.

Y-series processors


This series includes the Core m7, Core m5 and Core m3 processors. Each is a dual-core chipset with double the effective processing threads (thanks to Hyper-Threading Technology, explained below).

U-series processors


This series includes the Core i7, Core i5 and Core i3 processors. Each is a dual-core chipset with double the effective processing threads.

H-series processors


This series includes the Core i7, Core i5 and Core i3 processors. The first two are quad-core (with 8 effective processing threads); the Core i3 has two cores (4 threads).

Features of the new processors


A lot of useful features are provided; a few interesting ones are mentioned below.

The new processors feature version two of Turbo Boost Technology, wherein the clock frequency is dynamically raised as and when needed.

They also support Hyper-Threading Technology. This means there are two processing threads for each physical core on the chip. Developers can write multi-threaded applications to utilize the cores and their extra threads efficiently, getting more work done simultaneously.
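The point above can be sketched quickly: a program splits independent work across however many hardware threads the chip exposes. Python is used here only as an illustration (CPython's GIL means CPU-bound speedups really require processes or native code; the structure is what matters), and `os.cpu_count()` reports logical threads, which is physical cores × 2 with Hyper-Threading.

```python
from concurrent.futures import ThreadPoolExecutor
import os

# Split a big summation into independent chunks, one per logical thread.
def partial_sum(chunk):
    return sum(chunk)

data = list(range(1_000_000))
workers = os.cpu_count() or 4          # logical threads reported by the OS
chunks = [data[i::workers] for i in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    total = sum(pool.map(partial_sum, chunks))

assert total == sum(data)  # same answer, work split across threads
```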

Then there is Smart Cache: the total shared cache is allocated to each core at run-time, depending on need. This reduces the number of cache misses, in turn improving performance manifold.

They also offer improved security: a special hardware-based random-number generator is used to generate keys for encryption and decryption.

To deliver the most out of the battery, Collaborative Processor Performance Control is deployed, reducing active power to improve battery life.

Smart Response Technology is used to reduce the waiting time, and allow the user to access files and applications with greater speed.

As far as applications are concerned, on paper at least, there is no doubt these beasts can run any app, large or small, available today. Still, we'll have to wait and watch how they perform in practice.

With all the amazing features claiming to improve productivity and efficiency, I am sure Intel will make a mark in the world of mobile devices as well. After all, it is good to have Intel inside! One concern I have is the absence of any hexa- or octa-core processor from Intel. Since Qualcomm already has such chipsets in the market, and they are already in use by smartphone OEMs, I think Intel needs to join the bandwagon soon in order to compete.
A student at the University of London has developed a wearable glove which aims to give a voice to people with hearing and speech impairments. The idea is to analyze the sign language formed by hand gestures (of the person wearing the glove) and convert it into visual text and audible dialogue. The text can then be displayed on a screen, and the dialogue can be played on a music system or a smartphone.

SignLanguageGlove

A related app is also being developed. It will translate the text into the language of the reader's choice in real-time.

The developer's vision was to improve communication between persons with disabilities. This is the outcome of that vision.

How does it work?


Internal circuitry of the SignLanguageGlove

The electronic circuit on the glove is made up of five flex sensors, an accelerometer, a microcontroller board, and a four-digit graphic numerical display.

Each flex sensor corresponds to one finger of the hand. It is used for detecting bends and curvatures, and then reporting the values to a serial monitor. The attached accelerometer detects the orientation of the hand.

All the hardware is controlled by software that reads the output values of the sensors and the accelerometer, and matches them against a series of statements that determine which letters to display on the screen. This was the first prototype.
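The matching step can be pictured as comparing the current sensor readings against a stored template for each letter. The templates and tolerance below are entirely hypothetical, purely to illustrate the idea, and this is a sketch rather than the developer's actual code:

```python
# Hypothetical flex-sensor templates: one 5-value tuple per letter,
# where each value is a normalized bend reading for one finger.
TEMPLATES = {
    "A": (0.9, 0.1, 0.1, 0.1, 0.1),
    "B": (0.1, 0.9, 0.9, 0.9, 0.9),
    "C": (0.5, 0.5, 0.5, 0.5, 0.5),
}

def match_letter(reading, tolerance=0.6):
    """Return the letter whose template is closest to the reading,
    or None if nothing is within the tolerance (squared distance)."""
    best, best_dist = None, tolerance
    for letter, template in TEMPLATES.items():
        dist = sum((r - t) ** 2 for r, t in zip(reading, template))
        if dist < best_dist:
            best, best_dist = letter, dist
    return best

print(match_letter((0.85, 0.15, 0.1, 0.05, 0.1)))  # closest to "A"
```

A nearest-template match like this is forgiving of small variations in how a gesture is made, which matters because no two people bend their fingers in exactly the same way.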


The second version was faster and more robust, and featured compact hardware. The displayed text was also scrollable.

The third prototype features a text-to-speech chip.

The device has now grabbed the attention of various companies who want to put it into production. It is expected to cost somewhere around $385.
While using GPS on a smartphone is a common way to find your destination, it is still annoying when you just can't figure out the direction instantly, due to improper alignment of the device. Worry no more: Animotus is here.

3D printed cube that changes shape to show directions

It is a wirelessly connected, 3D-printed cube based on haptic technology. But unlike other haptic devices out there, it uses movement, instead of vibration or sound-based feedback, to alert the user. The shape of the device is determined by the user's position relative to the destination.

How does it work?


This is how Animotus works

The upper half of the cube rotates to let users know the direction they need to follow. Not just that, it also extends forward to indicate the remaining distance to the destination. Thus, one can feel the changing shape and know where to go. No more glancing at the GPS every now and then.
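The two movements boil down to a bearing and a distance between two points. The scaling constants in this sketch are made up for illustration and are not Animotus specifications:

```python
import math

def cube_state(user, dest, max_extension=1.0, full_scale_m=500.0):
    """Map a user position and destination (x, y in metres on a flat
    local grid) to a rotation angle for the cube's upper half and an
    extension length proportional to the remaining distance."""
    dx, dy = dest[0] - user[0], dest[1] - user[1]
    rotation_deg = math.degrees(math.atan2(dy, dx)) % 360
    distance = math.hypot(dx, dy)
    # Extension saturates once the destination is far enough away.
    extension = min(distance / full_scale_m, 1.0) * max_extension
    return rotation_deg, distance, extension

rot, dist, ext = cube_state((0, 0), (100, 100))
```

A device like this would recompute these two values as GPS fixes arrive and drive its servos accordingly, which is why the user can navigate by touch alone.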

See the technology in action in the video below:




The applications are endless. It could, for instance, be integrated with Google Maps, and the visually impaired would have no problem finding their way around the city.

Source: Yale
Smartphone technology is improving day by day. There is already a list of things your smartphone can monitor: light, movement, temperature, pressure and geographical location. What if it could also sense gas, and determine which gases are present? Well, the VTT Technical Research Centre of Finland claims to have developed such a sensor!


Smartphone sensor that can sense gas


The miniature gas sensor, which can be connected to mobile devices, is based on a Fabry–Pérot interferometer, which, in common terms, is an optical filter. This means that the sensor works by filtering light samples.

The technology works by shining light of varied wavelengths through a sample of air. The property of different gases to absorb light of different wavelengths comes to the rescue: by analyzing how much light is absorbed, and at which wavelengths, the gaseous components present in the sample can be determined.
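The underlying physics is the Beer–Lambert law, which relates absorbed light to concentration: absorbance A = log10(I0 / I) = ε·c·l, where ε is the gas's absorptivity at the chosen wavelength and l the optical path length. The numbers below are illustrative only, not VTT's figures:

```python
import math

def concentration(I_incident, I_transmitted, epsilon, path_length_cm):
    """Estimate a gas concentration from light absorption using the
    Beer-Lambert law: A = log10(I0 / I) = epsilon * c * l."""
    absorbance = math.log10(I_incident / I_transmitted)
    return absorbance / (epsilon * path_length_cm)

# Illustrative numbers only: an absorptivity of 20 L/(mol*cm),
# a 1 cm optical path, and half the light absorbed by the sample.
c = concentration(I_incident=1.0, I_transmitted=0.5,
                  epsilon=20.0, path_length_cm=1.0)
```

Repeating this measurement at the wavelengths the interferometer can tune to gives one equation per wavelength, which is how a mixture of gases can be separated into its components.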

The possibilities are endless. By monitoring the carbon dioxide concentration in the surrounding air, air quality can be measured. By analyzing a person's exhalations while asleep, sleep quality can be determined (perhaps in wearables as well?). Various other healthcare applications are possible.

Source: VTT Research