Thursday, December 13, 2012

Father-daughter duo have the world’s first brain-to-brain ‘telepathic’ conversation

Taken from: http://www.extremetech.com/extreme/143148-father-daughter-duo-have-the-worlds-first-brain-to-brain-telepathic-conversation

It should be fairly obvious why, all technological considerations aside, there has been much more research into letting machines extract our thoughts than insert them. Mind reading is a scary enough concept all on its own — but mind writing? It calls to mind the hacker deities of cyberpunk novels: skinny, trench-swathed Neos projecting e-thoughts into the skulls of passing civilians. With such basic issues of privacy on the line, it took the trusting relationship between UK scientist Christopher James and his adventurous young daughter to give us our first stab at developing real telepathic, brain-to-brain communication technology.
James’ process of telepathic communication is rough and its results shaky, but the principle of brain-to-brain (B2B) communication is unquestionably demonstrated. It begins with the by-now standard collection of mental information, achieved in this case with electrodes placed against the skull. “I only used scalp electrodes on my daughter, since my wife wouldn’t let me drill holes in my daughter’s head,” James told the Times of India.
In the experiment, the sender imagined a series of binary digits, broadcasting their choices by imagining movement in their right arm or their left. The resulting patterns of brain activity were recorded and expressed by an LED — one frequency to represent a one, another to represent a zero. The patterns are simply too arcane to be useful to the conscious mind, too quick and complex, but they’re not meant to be read like Morse code, in any case.
Dr. James conducting a preceding experiment in 2009.
When the LED signal travels to the recipient, it flashes into a very specific part of the eye (which part doesn’t matter much), and the resulting optical signal is sent to a predictable section of the visual cortex. Surface electrodes just like those that originally recorded the signal are much better than people at making sense of the quick-flash LED language, picking up more data from the recipient’s brain than the recipient consciously perceives.
Once the pattern has been reverse-engineered from LED back to arm-waving, the telepathic process is said to have concluded. “The key idea to grasp,” said Dr. James, “is that a person’s eyes cannot distinguish between the different frequencies of flashing lights but a part of his brain, [the] visual cortex, can.” For more serious results, the electrodes would have to be implanted on the surface of the brain, a procedure for which he had neither governmental nor spousal approval.
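Under the hood, this is essentially frequency tagging: each bit is assigned a flash rate, and the dominant frequency in the visual-cortex recording reveals which bit was sent. The sketch below simulates that decoding loop in Python; the sample rate, flash frequencies, and noise level are illustrative guesses, not the values from James’ experiment.

```python
import numpy as np

# Hypothetical parameters -- the article does not give the actual
# flash frequencies used, so these are illustrative choices.
FS = 256                     # sample rate of the surface-electrode recording, Hz
F_ZERO, F_ONE = 10.0, 15.0   # LED flash frequencies encoding 0 and 1

def encode_bit(bit, duration=2.0):
    """Simulate the visual-cortex response to an LED flashing at
    the frequency assigned to `bit` (plus measurement noise)."""
    t = np.arange(0, duration, 1 / FS)
    freq = F_ONE if bit else F_ZERO
    rng = np.random.default_rng(42)
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)

def decode_bit(signal):
    """Recover the bit by locating the dominant frequency with an FFT --
    the electrode-side equivalent of 'reading' the flicker."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / FS)
    peak = freqs[np.argmax(spectrum)]
    return int(abs(peak - F_ONE) < abs(peak - F_ZERO))

message = [1, 0, 1, 1, 0]
decoded = [decode_bit(encode_bit(b)) for b in message]
```

The FFT picks out the flicker frequency even under noise, which is why the electrodes out-read the conscious recipient: the brain carries the signal, but only the machine does the spectral bookkeeping.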
All in all, this advance will take some time to spawn any dystopian mind flayers or Inception-style dreamscapes. It covers the translation of thought to binary data, and the ability to technically induce that data in the brain of another person. The glaringly absent piece of the puzzle is the ability to induce much more sophisticated visual images; multi-pixel messages that appear in the mind’s eye, as opposed to the physical one.
That sort of sophistication could come through a better understanding of just how stimulation of the visual cortex influences images in the mind, or through teaching brains the language of light bulbs. With LED technology now finding its way into contact lenses, this technology seems well-suited to the (possibly) upcoming brain-machine revolution. It’s unclear what uses this tech might find in such a future, especially when it steps beyond the constraints of fatherly affection.

Microsoft sells 40 million Windows 8 copies in the first month, defying skeptical expectations

Taken from: http://www.extremetech.com/computing/141667-microsoft-sells-40-million-windows-8-licenses-in-the-first-month-defying-skeptical-expectations

In surprising but wholly welcome news, Microsoft has announced that, since its release one month ago, it has sold 40 million Windows 8 licenses — roughly the same number of Windows 7 licenses sold in the same period three years ago. Furthermore, in terms of upgrades, Windows 8 is “outpacing” Windows 7’s first month.
Before you hang primary-colored rectilinear bunting everywhere and warmly welcome our new Metro overlords, however, we have to drill a little further into these figures. As before, with the news that Microsoft sold four million copies of Windows 8 in its opening weekend, we still don’t know how many of those 40 million licenses are actually installed. There is the distinct possibility that many of those licenses are still sitting on retailers’ shelves.
It’s also important to note that Windows 8 is being deeply discounted at launch — much more so than Windows 7. It’s possible that people are ponying up for Windows 8 while it only costs $40, but waiting to see how the cross-paradigmatic Metro/Desktop train wreck plays out before actually installing. We also don’t yet know the impact of Microsoft’s accidental giveaway of free Windows 8 Pro license keys, though presumably these freebies aren’t being factored into the 40 million.
Windows 8 Metro Start screen Charms
Despite our hesitant hedging, though, it’s clear that Windows 8 hasn’t been a complete flop — in fact, so far, it has been rather successful. Without further info from Microsoft, we don’t know why Windows 8 has been a success — but seemingly that’s just a cross we’ll have to bear until Microsoft feels comfortable sharing more details. Are Windows 8 desktops flying off the shelves? Tablets? Or are the 40 million licenses predominantly upgrades from XP, Vista, and 7? Who knows. For what it’s worth, Microsoft still hasn’t released sales figures for its own Surface tablet.
In other news, Tami Reller, the Windows division’s CFO and CMO, shared some interesting tidbits on a call with industry and financial analysts. Microsoft’s early telemetry shows that 90% of users find the Charms bar on their first day, 85% open the Desktop, and 50% visit the Windows Store — where, apparently, some apps have already been downloaded one million times. Maybe that video of an old guy navigating Windows 8 for the first time (embedded below) was just a tad on the hyperbolically skeptical side.

Nokia’s Asha dumbphones are a cheap way for developing countries to stay connected

Taken from: http://www.extremetech.com/mobile/141464-nokias-asha-dumbphones-are-a-cheap-way-for-developing-countries-to-stay-connected

With the smartphone market growing quickly, it’s easy to forget about feature phones or dumbphones. A huge number of people still won’t or can’t drop hundreds of dollars on a device on top of an expensive data contract. This is especially true in smaller and emerging markets. Yesterday, Nokia announced two new dumbphones that are quite compelling in a number of ways, but won’t be available in the United States.
The Asha 205 (pictured above) and 206 (below) are both slated to be released by the end of the year for a mere $62 a pop, but only in markets not well represented with Nokia’s Windows Phone offerings. The Asha 205 features a landscape 2.4-inch screen, a physical QWERTY keyboard, Bluetooth 2.1 (EDR), and a rear-facing VGA camera. The Asha 206 features a portrait 2.4-inch screen, a standard numerical keypad, Bluetooth 2.1 (EDR), and a 1.3 megapixel rear-facing camera. Facebook and Twitter are integrated heavily on these devices, but that will be slow-going with only 2G capability. Considering its target markets, however, this doesn’t seem like such a hindrance.
Asha 206 Phones
We’re currently in a strange place with the cellphone market. It’s clear that eventually all phones will be smart, but the devices and data plans are just too expensive for a segment of the world. During this interim period, it’s brilliant for companies like Nokia to offer lower-end devices for people on a budget who still want to stay connected with their friends. They won’t be playing Letterpress or editing a Word document on the go, but these Asha phones have a lot of potential for markets like India, with a growing number of middle-class people wanting to stay connected.
While these phones aren’t meant to compete with iPhones or Android devices at all, they do serve a purpose. In fact, they have a rather ingenious feature that takes advantage of local sharing instead of relying on the cell networks. Instead of pairing two devices together, an owner of one of these Asha phones can choose to use Nokia’s “Slam” to send a picture to whatever the nearest Bluetooth-enabled device is. It works over the standard Bluetooth 2.1 EDR spec, so any other Bluetooth device can receive the image without needing to pair. In a way, it works very similarly to Bump on smartphones. This is a nice workaround for easily sharing photos without the use of a cell connection. Not to mention that the device is available for a relatively cheap unsubsidized price. This allows flexibility for the user, and fosters competition in the market.

As members of the Western world, it is far too easy for us to project our own environment onto the rest of the world. Dumbphones are still a useful product, and they’re fast becoming smart. It’s good to see Nokia serving this market — selling these phones as a stepping stone so that one day these markets will be using full-fledged smartphones. Anything that helps developing nations stay connected with the rest of the world at a reasonable cost is worthy of notice, so Nokia’s Asha phones get a big thumbs-up.

Smartphones become capable of sensing human emotion

Taken from: http://www.extremetech.com/mobile/142486-smartphones-gain-the-ability-to-sense-human-emotion


Smartphones are amazing. They tell us where we’re going, let us know if it’s going to rain, and even act like personal assistants. Now, a new research project out of the University of Rochester aims to make your phone capable of sensing your emotions just from measuring how you’re speaking — not based on what you’re saying.
This research, titled the Bridge Project, focuses on small changes in the human voice. Rather than using the traditional methodology of self-reporting and monitoring body language, this new method is based on automatic, passive emotion detection. The technology can always be listening, monitoring a patient’s emotional state without any work on his or her part — providing a fuller picture of the patient’s overall status.
The basics of the system involve measuring twelve different aspects of speech, and then mapping the data onto six different emotions. Wendi Heinzelman, professor of electrical and computer engineering, said that the project analyzed completely emotionless phrases of speech, such as saying dates of the month. Impressively, the team reached an 81% accuracy rating with this model, while previous attempts managed only around 55%. By having actors read scripts with specified emotional performances, the researchers are able to tune their algorithm to associate certain pitches, volumes, and harmonics with a specific emotional state.
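A bare-bones way to picture the “twelve features in, six emotions out” mapping is a nearest-centroid classifier: average the feature vectors for each emotion during training, then assign new speech to the closest average. The sketch below uses synthetic data and is far simpler than the researchers’ actual model; the cluster spreads and every numeric choice here are illustrative.

```python
import numpy as np

EMOTIONS = ["anger", "sadness", "happiness", "fear", "disgust", "neutral"]
N_FEATURES = 12   # stand-ins for pitch, volume, harmonics, etc.

rng = np.random.default_rng(0)
# Toy training data: for each emotion, feature vectors clustered around a
# distinct centroid (real data would come from acted, labeled speech).
true_centers = rng.normal(0, 3, size=(len(EMOTIONS), N_FEATURES))
X = np.vstack([c + rng.normal(0, 0.5, size=(30, N_FEATURES)) for c in true_centers])
y = np.repeat(np.arange(len(EMOTIONS)), 30)

# "Training": store the mean feature vector per emotion.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(EMOTIONS))])

def classify(features):
    """Map a 12-dimensional feature vector to the nearest emotion centroid."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return EMOTIONS[int(np.argmin(dists))]

accuracy = np.mean([classify(x) == EMOTIONS[k] for x, k in zip(X, y)])
```

On cleanly separated synthetic clusters this trivially scores well; the hard part of the real project is extracting features robust enough that genuine emotional states separate at all.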
While this is undoubtedly an invaluable tool for psychologists and medical researchers, it also has huge potential for consumers. Take a look at Apple’s Siri. It’s designed to appear more human-like by offering humorous answers, apologizing, and using more realistic speech, like “Let’s hear some Beatles,” instead of something with less flair, like “Now playing: The Beatles, track one.” This gives us a better experience because it mimics human interaction. Now, think about this technology integrated into Siri. When you’re getting frustrated, it could offer simple hints on how to interact better. When you’re sad, it could throw in compliments.

In Dr. Oliver Sacks’s book The Man Who Mistook His Wife For A Hat, he tells a story about a group of patients suffering from aphasia — an inability to understand words. In the story, he details how very capable these patients are in detecting emotion through speech. In fact, they are able to use sound cues to effectively communicate with their loved ones and doctors despite not being able to understand the words directly. He even notes that it is extremely difficult to execute a lie in front of an aphasiac because they are so adept at picking up the hidden emotion. This story truly illustrates how much of our emotional states are expressed verbally, and just how useful this research really is.

Think you have a big screen TV? Check out these monster video walls

Taken from: http://www.extremetech.com/extreme/137543-think-you-have-a-big-screen-tv-check-out-these-monster-video-walls



In honor of National Big Screen TV Day, aka Black Friday, we thought we’d share a couple of massive screens unveiled by GE and by Stony Brook University that you can drool over. Now that CNN-sized interactive displays have become fairly commonplace, GE has upped the ante, unveiling a 180-degree, 40-foot, interactive video wall in its Toronto Customer Experience Center (CEC). Made by Prysm, Inc. of San Jose using Laser Phosphor Display (LPD) technology, the display will allow visitors to take a guided tour of GE products and technologies in an immersive setting.
Prysm’s proprietary LPD technology relies on a 405nm laser, similar to that used for Blu-ray, which is modulated as it is projected onto a phosphor layer. The phosphor layer is unusually thin, providing for a claimed industry-leading 178-degree viewing angle. Rather than attempt to light an entire large display with one laser, Prysm’s display walls are built from multiple tiles, each with its own laser engine, laser processor, and phosphor layer.
Prysm LPD display wall product promo image
The 10-foot-high video wall is constructed from more than one hundred 320×240 integrated LPD display tiles. The tiles support viewing from a full 178-degree field of view and are much lower power than a backlit projector system of similar size, or an LCD array like the one Sharp uses in its 5D attraction. Another advantage of the Prysm system over a multi-projector-based system like HP’s Photon is that it can support a variety of screen shapes — like the curved wall used by GE in this installation.
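For a rough sense of scale, the tile counts above imply a total resolution in the several-megapixel range. The arithmetic below takes the article’s “more than one hundred” tiles at face value, using 100 as a lower bound.

```python
# Rough pixel budget for a tiled LPD wall: 320x240 per tile,
# at least one hundred tiles.
TILE_W, TILE_H = 320, 240
N_TILES = 100  # lower bound; the article says "more than one hundred"

pixels_per_tile = TILE_W * TILE_H           # 76,800
total_pixels = pixels_per_tile * N_TILES    # at least 7.68 megapixels

# For comparison: how many 1080p frames that is.
times_1080p = total_pixels / (1920 * 1080)
```

So the wall carries at least the pixel count of roughly three and a half 1080p televisions, spread over a 40-foot curve, with each tile lit by its own laser engine.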
While the LPD display is not touch-enabled, it is controllable from an iPad used by the visitors’ GE host. GE and Prysm aren’t revealing the cost of the display, but it probably won’t be the deal of the day on LogicBuy any time soon.

Stony Brook takes you around the world — virtually

While the magic of the GE video wall is largely in the display technology, a research project at Stony Brook University pushes the envelope in processing power to create what it calls a “Reality Deck.” Featuring a record-shattering 1.5 billion pixels on 416 screens, the $2 million deck is powered by over 220 TFLOPS of processor power.
Stony Brook Reality Deck
The Deck actually has four walls, although of course we can only show three in this picture. 416 separate screens make up the walls, all driven by a massive graphics supercomputer, with 240 CPU cores, 80 GPUs and 1.2TB of memory. The display has what is called an “infinite canvas” feature, allowing it to change what is shown as a viewer walks around the deck area.
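The headline figures roughly check out if you assume typical 2560×1440 panels; that panel resolution is my assumption, as the article doesn’t name the monitor model.

```python
# Sanity-checking the Reality Deck's headline numbers.
N_SCREENS = 416
N_GPUS = 80
ASSUMED_PANEL = 2560 * 1440   # 3,686,400 pixels per screen (assumed model)

total_pixels = N_SCREENS * ASSUMED_PANEL   # ~1.53 billion pixels
per_gpu = N_SCREENS / N_GPUS               # screens driven per GPU
```

That lands almost exactly on the quoted 1.5 billion pixels, with each GPU responsible for just over five screens.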
Source material for the Reality Deck can come either from massive multi-gigapixel panoramic images or from architectural models that can be visualized in real time. To complete the experience, the system features a sound system with 22 speakers and four subwoofers.
The Reality Deck’s $2M price tag was funded by the National Science Foundation and Stony Brook University, as part of a project aiming to enable breakthroughs in healthcare, national security and energy research.
For those who thought the system in Minority Report was cool, or that a 4K 3D display would be as good as it gets, these systems point to an exciting future of Black Fridays full of discount video wall promotions.

3D-printed consumer electronics just became a reality

Taken from: http://www.extremetech.com/extreme/141669-3d-printed-consumer-electronics-just-became-a-reality

Embedding sensors and electronics inside 3D objects in a single build process has been a long-sought goal in 3D printing (3DP). A group led by Simon Leigh, at the University of Warwick in England, has now done just that. Leigh’s group developed a low-cost material they call carbomorph – a carbon black filler in a matrix of a biodegradable polyester.
In addition to being conductive, carbomorph is piezoresistive. This means that when it is bent or stressed, its resistance changes. Typically the resistance increases as the object is bent, because the conductive grains are spread further apart. Piezoresistive strips of carbon nanotubes have been created previously by other groups and used in the measurement of movement, but printing them is something new.
3D printed flexi glove
The goal of Leigh’s group was to print a motion-sensing glove in a single unbroken run. This required a machine with multiple heads, and their Bits from Bytes BFB3000 fit the bill. In one head they used polylactic acid (PLA) to print the main body of the glove. The other head contained the carbomorph for the embedded sensing strips in each finger. The cross section of each embedded strip was only 0.25 square millimeters, yet proved sufficient for getting a robust piezoresistive signal to compute the bend angle.
In an effort to make their work freely available they published it in the open access journal PLoS ONE. The piezoresistive measurements were done using the popular Arduino Uno interface board and captured with Processing, an open-source software package for visualizing and manipulating data.
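The measurement itself is straightforward: place the printed strip in a voltage divider, read the midpoint with the Arduino’s 10-bit ADC, and invert the divider equation to recover resistance. The sketch below shows that inversion in Python; the supply voltage, fixed resistor, and ADC counts are illustrative, not values from the paper.

```python
# Hedged sketch of the sensing chain: the printed carbomorph strip sits at
# the bottom of a voltage divider, a 10-bit ADC reads the midpoint, and the
# strip's resistance (which rises as it bends) is recovered.
VCC = 5.0          # supply voltage, volts (illustrative)
R_FIXED = 10_000   # fixed divider resistor, ohms (illustrative)
ADC_MAX = 1023     # full-scale count of a 10-bit Arduino-style ADC

def strip_resistance(adc_count):
    """Invert the divider: Vout = VCC * R_strip / (R_strip + R_FIXED)."""
    v_out = VCC * adc_count / ADC_MAX
    return R_FIXED * v_out / (VCC - v_out)

# A flat strip reads lower than a bent one, since bending raises resistance.
flat = strip_resistance(400)   # ~6.4 kilohms
bent = strip_resistance(600)   # ~14.2 kilohms
```

Mapping the recovered resistance onto a bend angle is then just a calibration curve measured per finger.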
The group also printed capacitive buttons of the kind used in many common touch sensors, or as mouse replacements for human interface devices (HIDs). Capacitive measurements were also carried out with an Arduino, and implemented with the CapSense code library. The ability to print capacitive sensors potentially opens up 3DP to new areas including accurate measurement of distance, humidity, or acceleration.
3D printed capacitive mug
For the group’s final demonstration, things start to really get interesting. Two vertical capacitive sensor strips were embedded in the wall of a 3DP mug. This “smart vessel” yielded a reliable capacitance measurement which scaled linearly with the height of the fluid in the cup. One might imagine inexpensive party cups which report and summon a refill whenever a guest’s drink falls below a certain level.
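A linear capacitance-to-height sensor needs only a two-point calibration: one reading empty and one full. The sketch below shows the idea, including the party-cup refill alarm; all the capacitance values and the cup height are invented for illustration.

```python
# Two-point calibration for the "smart vessel": capacitance scales linearly
# with fluid height, so empty and full readings pin down the whole curve.
C_EMPTY, C_FULL = 12.0, 48.0   # measured capacitance, pF (hypothetical)
H_FULL = 10.0                  # cup height, cm (hypothetical)

def fluid_height(c_measured):
    """Linear interpolation between the empty and full calibration points."""
    frac = (c_measured - C_EMPTY) / (C_FULL - C_EMPTY)
    return max(0.0, min(H_FULL, frac * H_FULL))

def needs_refill(c_measured, threshold=0.25):
    """Summon a refill when the drink falls below a quarter of the cup."""
    return fluid_height(c_measured) < threshold * H_FULL

half = fluid_height(30.0)   # midway reading maps to half the cup height
```

Clamping to the calibrated range keeps noisy readings from reporting impossible fluid levels.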
Conductive 3D-printed materials, by nature of their composition, have only a fraction of metal or carbon’s electrical conductivity. Therefore at any interface with other electronics, where there will already be some unavoidable loss of any signal, extra care must be taken. It is for this reason that high-end audiophiles are willing to spend the extra money for gold-plated contacts — more signal is transduced and less is absorbed or reflected back to induce ringing or other unwanted noise.
Guess what this 3D printed object is?
In the case of capacitive button sensors, the group got around this problem by printing high-surface-area contacts in the shape of the commonly used banana-style plug. On the smart vessel they opted instead to use copper pads connected with silver conductive paint. There is no reason why copper or other metals might not someday also be printed. For example, several cancer treatments, like cisplatin, are basically metals bonded to chemical groups which make them soluble. This allows them to pass across membranes into cells or to be miscible with other solutions. Printing them in hydrophobic solvent which evaporates leaving behind the metal may one day be possible.
One thing yet to be done is to test the durability of the devices over time. If they are able to maintain the essential characteristics over many use cycles, and trips to the dishwasher, then these devices could find widespread application. Then again, if your product lifetime is only a couple of hours, like for a red Solo cup, they would already be perfect.

Saturday, November 24, 2012

Google Glass could be the virtual dieting pill of the future

Taken from: http://www.extremetech.com/extreme/140926-google-glass-could-be-the-virtual-dieting-pill-of-the-future


In a year or two, augmented reality (AR) headsets such as Google Glass may double up as a virtual dieting pill. New research from the University of Tokyo shows that a very simple AR trick can reduce the amount that you eat by 10% — and yes, the same trick, used in the inverse, can be used to increase food consumption by 15%, too.
The AR trick is very simple: By donning the glasses, the University of Tokyo’s special software “seamlessly” scales up the size of your food. In the video below, you see a person picking up what seems to be an Oreo cookie, and then the software automatically scales it up to 1.5 times its natural size. Using a deformation algorithm, the person’s hand is manipulated so that the giant Oreo appears (somewhat) natural. In testing, this simple trick was enough to reduce the amount of food eaten by 10%.
In the same video you can also see the inverse effect applied, shrinking the Oreo down to two-thirds its natural size. In testing, this increased food consumption by 15%. As you can see, the technology currently requires the use of blue screen chroma keying, but moving forward the Hirose-Tanikawa Lab research team hopes to improve the software so that it could work anywhere.

This new research dovetails neatly with an area of nutritional science that has received a lot of attention in the United States of Obesity recently: that the size of the serving/plate/cup/receptacle directly affects your intake. It has been shown time and time again that large plates and large servings encourage you to consume more. In one study, restaurant-goers ate more food when equipped with smaller forks; but at home, the opposite is true. In another study, it was shown that you eat more food if the color of your plate matches what you’re eating.
The fact is, there’s a lot more to dieting than simply reducing your calorific intake and exercising regularly. Your state of mind as you sit down to eat, and your perception of what you’re eating, are just as important — which is exciting news, because both of those factors can be hacked. Until now, the inherent bulk of computers has prevented them from meddling with your perceptions of reality — but with smartphones, and soon AR headsets, that is beginning to change. For now it’s just your vision, but through other augmentations and implants it shouldn’t be too difficult to alter your perception of touch, taste, and smell.

Microsoft demos English-to-Chinese universal translator that keeps your voice and accent

Taken from: http://www.extremetech.com/computing/139945-microsoft-demos-english-to-chinese-universal-translator-that-keeps-your-voice-and-accent



At an event in China, Microsoft Research chief Rick Rashid has demonstrated a real-time English-to-Mandarin speech-to-speech translation engine. Not only is the translation very accurate, but the software also preserves the user’s accent and intonation. We’re not just talking about a digitized, robotic translator here — this is firmly within the realms of Doctor Who or Star Trek universal translation.
The best way to appreciate this technology is to watch the video below. The first six minutes or so is Rick Rashid explaining the fundamental difficulty of computer translation, and then the last few minutes actually demonstrate the software’s English-to-Mandarin speech-to-speech translation engine. Sadly I don’t speak Chinese, so I can’t attest to the veracity of the translation, but the audience — some 2,000 Chinese students — seems rather impressed. A professional English/Chinese interpreter also remarked to me that the computer translation is surprisingly good; not quite up to the level of human translation, but it’s getting close.

There is, of course, a lot of technological wizardry occurring behind the scenes. For a start, the software needs to be trained — both with a few hours of native, spoken Chinese, and an hour of Rick Rashid’s spoken English. From this, the software essentially breaks your speech down into the smallest components (phonemes), and then mushes them together with the Chinese equivalent, creating a big map of English to Mandarin sounds. Then, during the actual on-stage presentation, the software converts his speech into text (as you see on the left screen), his text into Mandarin text (right screen), and then the Rashid/Chinese mash-up created during the training process is used to turn that text into spoken words.
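Abstractly, the pipeline is three stages chained together: recognize speech, translate the text, then synthesize Mandarin audio with the speaker’s own voice model. The toy sketch below stubs each stage out with lookup tables so the data flow is visible; the real system uses trained statistical models at every step, and every dictionary entry here is a stand-in.

```python
# Toy model of the three-stage pipeline. The lookup tables are stand-ins
# for trained recognition, translation, and voice-synthesis models.
EN_TO_ZH = {"hello": "你好", "world": "世界"}   # translation model stub
SPEAKER_VOICE = {"你": "ni3*", "好": "hao3*", "世": "shi4*", "界": "jie4*"}
# The trailing '*' marks a phoneme rendered with the speaker's own timbre,
# standing in for the English/Mandarin sound map built during training.

def recognize(audio_words):
    """Stage 1: speech -> English text (stubbed as lowercasing)."""
    return [w.lower() for w in audio_words]

def translate(words):
    """Stage 2: English text -> Mandarin text."""
    return "".join(EN_TO_ZH[w] for w in words)

def synthesize(mandarin_text):
    """Stage 3: Mandarin text -> speech in the original speaker's voice."""
    return " ".join(SPEAKER_VOICE[ch] for ch in mandarin_text)

spoken = synthesize(translate(recognize(["Hello", "World"])))
```

The key architectural point survives even in the stub: the speaker’s voice enters only at the final stage, which is why the translation can stay accurate while the output still sounds like Rashid.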
The end result definitely has a strong hint of digitized, robotic Microsoft Sam, but it’s surprising just how much of Rashid’s accent, timbre, and intonation is preserved.
In terms of accuracy, Microsoft says that the complete system has an error rate of roughly one word in eight — an improvement of 30% over the previous best of one word in five. Such a dramatic improvement was enabled by the use of Deep Neural Networks, a machine learning technique devised by Geoffrey Hinton of the University of Toronto. A Deep Neural Network is basically an artificial neural network (software that models thousands of interconnected “neurons”), but with some tweaks so that it more closely mimics the behavior of the human brain.
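Structurally, a deep network is just several nonlinear layers stacked between the acoustic features and the output classes. The sketch below shows a forward pass with random weights, purely to illustrate the shape of the computation; a real acoustic model learns its weights from hours of transcribed speech, and the layer sizes here are arbitrary.

```python
import numpy as np

# Minimal "deep" network forward pass: stacked nonlinear layers, the
# structural idea behind the Deep Neural Networks described above.
rng = np.random.default_rng(1)
layer_sizes = [40, 128, 128, 10]   # e.g. 40 acoustic features -> 10 phone classes

weights = [rng.normal(0, 0.1, size=(a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate features through each layer: sigmoid hidden units,
    softmax over the output classes."""
    for w in weights[:-1]:
        x = 1.0 / (1.0 + np.exp(-(x @ w)))   # sigmoid nonlinearity
    logits = x @ weights[-1]
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

probs = forward(rng.normal(size=40))   # a probability over phone classes
```

The depth is what matters: each extra layer lets the model build features of features, which is where the jump from one-in-five to one-in-eight word errors came from.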
Moving forward, the big question is when Microsoft Research’s speech-to-speech translation software will actually find its way to market — and yes, in case you were wondering, the software isn’t only limited to English and Chinese; all 26 languages supported by the Microsoft Speech Platform can be used, including Mandarin-to-English. The most obvious use case would be on your Windows Phone 8 (or 9?) smartphone, or Skype: You could call up a company in China or Germany or Brazil, speak normally in English, and they would hear your voice in their local language. You could also use your smartphone as a universal translator while travelling. As you can see below, Microsoft was toying with real-time phone-to-phone translation all the way back in 2010:

Presumably Microsoft is working on such applications — but it’s probably being held back by practical considerations, such as the processing power required to do speech-to-speech translation, or providing an easy-to-use interface for the training/learning process. The training process itself might require more processing power than a home user can feasibly provide, too. There’s always the cloud, though!

Stop worrying, and embrace RFID

Taken from: http://www.extremetech.com/electronics/141277-stop-worrying-and-embrace-rfid


Radio-frequency identification (RFID) is a simple way of using embedded chips as a form of tracking and authentication. It’s now fairly common to have pets implanted with RFID chips so they can be identified even without their collars attached. As RFID use has become more common in the developed world, there has been a non-trivial amount of pushback from luddites, the religious, and privacy advocates. In reality, RFID isn’t that scary, and we should embrace it.
Wired has an article explaining a recent kerfuffle between a student and her high school. Simply put, the school requires students to use RFID-equipped badges so it can track movement on campus for funding and truancy purposes. The student refused to wear the badge on religious and privacy grounds. In response, the school suspended her until she agrees to use the school ID. A legal battle ensued, and a judge has temporarily lifted the suspension until the case proceeds.
RFID next to a grain of rice
In reality, these concerns are minor and based on fear of technology. This is just a tinfoil-hat situation on a larger scale than normal. From the known details of the story, these badges aren’t even being used at the individual class level. The low-tech method of having teachers take roll call in class is even more fine-grained than this RFID solution. If this were legitimately about privacy concerns, advocates would be against roll call in school as well. Instead, this whole situation is about fear mongering — not privacy concerns.
While there are some issues with the technology, specifically relating to other people accessing the information on the chip, this case doesn’t showcase them. Preventing unauthorized access to the chip’s data is a genuine problem, but it can be handled with cryptography. For example, requiring a password or using rolling codes can thwart evildoers successfully. If you’re really worried about other people reading your RFID chip, it can be rendered harmless simply by covering it with a sleeve that works like a Faraday cage.
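A challenge-response scheme makes the point concrete: instead of broadcasting a static ID that anyone can skim and replay, the tag proves knowledge of a shared key by answering a fresh random challenge. The sketch below uses an HMAC for this; it is a generic cryptographic pattern, not a description of any particular RFID product.

```python
import hashlib
import hmac
import secrets

# Generic challenge-response authentication: the reader sends a random
# challenge, and the tag answers with an HMAC over it using a key shared
# with the reader. Eavesdropped answers are useless against new challenges.
SHARED_KEY = b"tag-secret-key"   # provisioned into tag and reader (hypothetical)

def tag_respond(challenge, key=SHARED_KEY):
    """What the tag computes when interrogated."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(challenge, response, key=SHARED_KEY):
    """Reader recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(tag_respond(challenge, key), response)

challenge = secrets.token_bytes(16)
ok = reader_verify(challenge, tag_respond(challenge))            # accepted
replayed = reader_verify(secrets.token_bytes(16),
                         tag_respond(challenge))                 # rejected
```

Real low-cost tags are constrained to lighter primitives than SHA-256, but the replay-resistance argument is the same.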
Behavior is the real problem here — not technology. RFID is a useful tool that is already being used by companies like Walmart and organizations like the Department of Defense in the United States for authentication and tracking purposes. While RFID can be abused just like anything else, the technology isn’t inherently bad. Even the more paranoid among us should embrace RFID, and stop worrying about the tech so much. After all, common technology like smartphones and tablets are more susceptible to nefarious use. Give RFID a break.

Sunday, November 4, 2012

Turning the smartphone from a telephone into a tricorder

Taken from: http://www.extremetech.com/extreme/138658-turning-the-smartphone-from-a-telephone-into-a-tricorder



Earlier this year, well-known cardiologist Eric Topol published his highly successful book, “The Creative Destruction of Medicine.” In it he describes several examples where smartphones, particularly the iPhone, have been morphed into first-rate medical devices with the potential to put clinical-level diagnostics in the hands of everyday users. Coincidentally, Topol was on a flight not long ago, returning from a lecture where he had spoken about a new device made by AliveCor. The pilot intoned an urgent, “Is there a doctor on board?” In response, Topol took out the AliveCor prototype, recorded a highly accurate electrocardiogram (ECG) of an ailing passenger, and made a quick diagnosis from 35,000 feet.
BGStar and iPhone
As the leader in the smartphone revolution, the iPhone has been the platform of choice for early adopters in the health and quantified-self arenas. Even so, there are a few shortcomings to development on the iPhone that, at least among DIYers, have led to Android becoming the path forward. Apple’s single-vendor approach, and its sequestering of many low-level input/output details in the name of ease of use, have made interfacing the device with external sensors both a difficult and an expensive proposition.
While it can be nearly impossible to write an Android app that will work on every device out there, writing an app to work on one’s own smartphone or tablet is fairly straightforward. Another challenge for the smartphone as a medical device is that many important sensor variables are analog in nature. It is possible to use the analog-to-digital converter on the audio input for data acquisition; however, in the absence of sophisticated multiplexing, one is limited to a single channel (unless some kind of expansion device is used).
Run tracking and calorie counting apps can certainly be regarded among the successes of the smartphone, but without dedicated sensor hardware, the philosophy of “there’s an app for that” only goes so far. A host of products now available for Android let users with a little bit of technical know-how create powerful devices previously found only in the domain of hospitals and law enforcement. One of the most successful expansion boards that allows Android devices to control external instruments and to orchestrate the collection of a variety of sensor data is the IOIO board. The system works well in wireless mode with most Bluetooth dongles, and its on-board FPGA gives 25 I/O channels, including plenty for analog input. It also handles analog output via pulse width modulation (PWM).
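Working with such a board mostly comes down to two conversions: scaling raw ADC counts into volts, and expressing a desired analog output as a PWM duty cycle. The sketch below shows both; the reference voltage and resolution are generic assumptions rather than IOIO specifications.

```python
# Generic analog-I/O conversions for an expansion board. The reference
# voltage and 10-bit resolution are assumptions, not IOIO specifics.
V_REF = 3.3
ADC_BITS = 10

def adc_to_volts(count):
    """Scale a raw ADC count into volts against the reference."""
    return V_REF * count / (2 ** ADC_BITS - 1)

def volts_to_duty(v_target):
    """Approximate an analog output via PWM: the fraction of each period
    the pin is held high so the filtered mean equals v_target."""
    return min(1.0, max(0.0, v_target / V_REF))

mid_v = adc_to_volts(511)      # a mid-scale reading, ~1.65 V
duty = volts_to_duty(1.65)     # half-rail output -> 50% duty cycle
```

Everything sensor-specific, such as turning that voltage into a heart rate or a gas concentration, then lives on top of these two primitives.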
Vendors like Sparkfun, a popular supplier for the Arduino developer market, have realized the power inherent in readily programmable smartphones. They provide inexpensive heart monitors, as well as CO2 gas, dissolved oxygen, and blood alcohol content (BAC) sensors. These sellers provide documentation and, most importantly, access to the source code. With this information, interfacing with a BAC sensor, for example, is relatively straightforward and, if appropriately calibrated by the user, very accurate.
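"Appropriately calibrated by the user" usually means mapping raw sensor readings onto physical units against known reference points. A minimal two-point linear calibration sketch; the reference values below are made up for illustration, not taken from any vendor's datasheet:

```python
def make_calibration(raw_lo, val_lo, raw_hi, val_hi):
    """Build a linear function mapping raw ADC counts to calibrated units."""
    slope = (val_hi - val_lo) / (raw_hi - raw_lo)
    return lambda raw: val_lo + slope * (raw - raw_lo)

# Hypothetical reference points: 200 counts at 0.00 BAC, 900 counts at 0.10 BAC
counts_to_bac = make_calibration(200, 0.00, 900, 0.10)
bac = counts_to_bac(550)  # halfway between the two references
```

Real gas sensors are rarely perfectly linear, so a production app would use more reference points or the response curve from the sensor's documentation; the structure of the code stays the same.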
MK802 Android PC
USB stick computers running Android 4.0 (Ice Cream Sandwich) or newer, like the MK802, readily connect to boards like the IOIO, and can take the cost out of dedicating a phone or tablet to a sensor. They can log data to any of several storage mediums and cut a nice form factor when keyboards and displays are shed.
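On such a headless stick computer, logging can be as simple as appending timestamped readings to a CSV file on whatever storage is attached. A sketch; the sample values stand in for real sensor I/O:

```python
import csv
import io
import time

def log_readings(readings, out):
    """Append (timestamp, value) rows to a CSV destination."""
    writer = csv.writer(out)
    for value in readings:
        writer.writerow([time.time(), value])

# In practice `out` would be an open file on the device's storage;
# an in-memory buffer makes the sketch self-contained.
buf = io.StringIO()
log_readings([3.31, 3.29, 3.30], buf)  # stand-in sensor values
rows = buf.getvalue().strip().splitlines()
```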
Despite the advances, a few ugly details in the smartphone-based health field can no longer be ignored. The FDA will increasingly face the task of deciding when a phone or tablet becomes a medical device that needs to be regulated as such, and when it is simply the front end for another device. Manufacturers of products for the seemingly straightforward task of monitoring glucose or insulin will have to tread carefully. Others seeking to enhance the absorption of medications through the skin by opening transient microchannels with current or ultrasound, perhaps built into a smartwatch, even more so.
In just a few years, children wearing smart devices could become the norm. These gadgets could monitor variables like ambient peanut allergen using nanopore immunosensors, with processing power to spare for forming dynamic early-warning networks as conditions indicate. Without efficient governance dispensing timely permission to use devices like the AliveCor in humans, the initiation of life-saving care may too often begin with hardware designed and approved only for our pets. But if our regulatory structure comes down on the side of open technological advancement, the future of these medical gadgets will be bright.

Why the 13-inch MacBook Pro with Retina display is Apple’s best laptop

Taken from: http://www.extremetech.com/computing/138605-why-the-13-inch-retina-macbook-pro-is-apples-best-laptop



On October 23, the big announcements from Apple raised many questions. What are the differences between the various MacBook Pros? With these Retina displays, where does that leave the Air? As it turns out, the new MacBook Pro has jumped to the front of the pack.
If you’re looking for a new laptop with some dazzle, you now have two excellent choices in the MacBook Pro line. The 15-inch MacBook Pro was updated earlier this year with a screen that puts last year’s model to shame. Now, the 13-inch MacBook Pro is available with the same high-quality screen in a smaller package. Now that we have these high-res displays, it’s hard to imagine ever going back.
MacBook Pro with Retina display
The entry level 15-inch Pro is undoubtedly more powerful than the 13-inch. At $2199, it certainly better be. A bigger SSD, a quad-core processor, and a discrete graphics card make it a powerhouse. It’s a fantastic computer, but it just isn’t worth the extra weight and cost when compared to the 13-inch. The smaller model is a mere 3.57 pounds (1.6kg), while the 15-inch is almost an entire pound heavier. The entry level 13-inch is $1699, and that makes it $500 cheaper than its big brother. If you’re not specifically looking to dedicate your MacBook Pro to high-end gaming or video rendering, the 13-inch is clearly the better purchase. There are always tradeoffs between the two models, but this generation lands squarely on the 13-inch’s side.
So, what of the Air? Well, the cheapest 11-inch model is $999, and the 13-inch model starts at just $200 more. They’re certainly small and lightweight. The 11-inch is less than two and a half pounds (1kg), and the 13-inch is more than half a pound lighter than the 13-inch MacBook Pro. Here’s the rub: they’re still underpowered, and their screens just don’t match up to the Retina-caliber displays in the Pro line. The 1440×900 display in the 13-inch Air doesn’t come anywhere close to the quality of the 2560×1600 panel in the 13-inch Pro. Until Apple gets around to updating the Air series, it is incredibly difficult to recommend them to anyone except the heaviest of travelers. To be fair, things will get a lot more interesting when we see Retina-caliber Airs.
MacBook Air
As it stands with Apple’s current laptop line-up, the 13-inch MacBook Pro comes out smelling like roses. It doesn’t have the raw horsepower of the 15-inch Pro, and it doesn’t have the extreme thinness of either Air, but it does have the overall best value. With its respectable internals, mid-range weight, and brilliant screen, the 13-inch Pro is simply the best laptop Apple sells now. A week ago, that wasn’t the case. The previous version of the 13-inch was good, but the new version is fantastic. Unless you have very niche and specific needs, this is the laptop you want to have.

New tack for OLPC: Let the students teach themselves

Taken from: http://www.extremetech.com/computing/138997-new-tack-for-olpc-let-the-students-teach-themselves



After delivering laptops to poor schools around the world (mostly in third world countries) in order to improve education, the One Laptop Per Child (OLPC) organization is trying something completely different: student self-learning.
OLPC decided to conduct an experiment. The organization picked two Ethiopian villages and dropped off Motorola Xoom tablets with locked-down software (that disabled the camera and froze the home screen settings) to the children there. The interesting part is that the workers gave absolutely no instructions on what to do with them. The boxes containing the tablets were still taped up when they got there.
Conventional wisdom says that the children would just play with the boxes and then get bored. That’s not what happened. Instead, the children opened the boxes and figured out how to switch on the tablets, all within four minutes of receiving the shipment.
Within a week, the children had completely figured out how to use the tablets and were using apps like crazy, up to 47 a day. Within two weeks, the children were learning the alphabet and the written word. Within five months, they hacked Android to bypass the camera restriction and customize the home screens!
OLPC's Xoom tablet
While these are promising results showing that children can learn and be creative on their own, conclusive findings will require more experiments over a multi-year period. OLPC founder Nicholas Negroponte stated that, if it gets funding, the organization would start over in another village to redo the experiment and observe for another “year and a half to two years.”
This is not the first time Negroponte has talked about this idea. Last year, he mentioned that some computers would be airdropped to children with no instructions whatsoever. This appears to be what he was talking about then, though apparently the Xooms were not airdropped.
This is an exciting development in education experimentation. We often forget that children are very capable of absorbing information. Children are mentally flexible, and learning by doing is how humans are supposed to be educated, after all. Indeed, the scientific method is simply a codified version of that for the purposes of research. While the first world has largely switched to instructional learning, developing nations still have huge swathes of children not taught in this manner.
If nothing else, this will be a very interesting experiment to follow, because it may result in some surprising conclusions that will improve education around the world.

iPad Mini review round-up: Apple’s beautiful but compromised cash-in

Taken from: http://www.extremetech.com/computing/139252-ipad-mini-review-round-up-apples-beautiful-but-compromised-cash-in



Early this morning, right on schedule and in perfect synchronicity, the first hands-on reviews of Apple’s iPad Mini hit the web. In a rather refreshing and pleasant twist, the reviews aren’t universally positive. In fact, it looks like Apple may have finally released a non-perfect mobile device — the perfect accompaniment to the rushed, botched release of Apple’s own Maps app in iOS 6 earlier this year.
At $330 for the cheapest, WiFi-only iPad Mini — a $130 premium over 7-inch tablets from Amazon and Google — we had expected a truly premium product. As far as the actual, physical device is concerned, reviewers universally agree that the iPad Mini doesn’t disappoint; as expected, the Mini looks and feels awesome. Beyond that, though, it seems Apple made a lot of compromises to bring the iPad Mini to market — compromises that really shouldn’t exist in a $330 device.
The 7-inch tablets
For a start, you can forget about Apple using some kind of magical hocus-pocus to ameliorate the issues caused by the low-resolution (163 PPI), 7.9-inch display. There is simply no getting around it: The iPad Mini, with just 1024×768 pixels to its name, isn’t as sharp as the competition. If you have used a higher-resolution device (such as almost every other tablet and smartphone on the market), the iPad Mini will look jaggy and fuzzy by comparison.
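The pixel-density figures quoted throughout these reviews follow directly from resolution and screen size: PPI is the diagonal pixel count divided by the diagonal in inches. For example:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

mini_ppi = ppi(1024, 768, 7.9)    # iPad Mini: roughly 162 PPI
nexus7_ppi = ppi(1280, 800, 7.0)  # Nexus 7: roughly 216 PPI
```

That 50-odd PPI gap is the "jaggy and fuzzy by comparison" the reviewers are describing.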
Then we have the internals — namely, the A5 SoC that debuted with the iPad 2, 18 months ago. To be quite frank, it’s utterly insane that Apple thought it could use an old chip and get away with it. Almost every review notes that apps can take a long time to load, and others note that the device just feels laggy — a bit like the iPad 2, even, which was powered by the same SoC. The minimal amount of RAM (512MB) might also play a role here. On the plus side, the iPad Mini’s rear-facing camera is being reviewed very positively.
There are also a few complaints about the actual form factor of the device. By sticking to the iPad’s 4:3 aspect ratio, the left and right bezels on the iPad Mini are very small, which means it can be quite hard to hold the device in portrait orientation. Apple has tweaked the Mini’s version of iOS to ignore accidental thumb taps, but reviews suggest that this feature isn’t quite perfect yet, sometimes resulting in intentional swipes and taps being ignored. Some reviewers also say that the iPad Mini’s incredibly svelte dimensions (it’s just 0.68lb/308g and 7.2mm thick) make it almost too thin and light to get a proper purchase. The Nexus 7 and Kindle Fire, with their 16:9 aspect ratios and significantly fatter bodies, obviously don’t have this issue.
iPad Mini, front, back, side

Milking the cash cow

The obvious question that we have to ask is why? Why did Apple compromise so brutally on the internals of the iPad Mini? For a few cents more, Apple could’ve put an A5X, A6, or A6X inside the iPad Mini, ameliorating any performance issues and instantly making it the fastest tablet on the market. For a few dollars more, Apple could’ve sourced a high-res display. But it didn’t — why?
The only explanation that fits is that Apple is intentionally low-balling the consumer, just to make more money. By making the iPad Mini beautiful, and cheaper than the real iPad, Apple guarantees millions of sales — even if the hardware spec isn’t up to scratch, or there are a few rough usability edges. Then, in six months, Apple can release the iPad Mini 2, with a faster processor, more RAM, and perhaps a Retina display — and boom, another billion dollars of profit.
In England, we have a delightful idiom: mutton dressed as lamb. It succinctly describes the act of taking something slightly old, haggard, or cheap, and dressing it up as something new. The American equivalent, I think, is putting lipstick on a pig. Apple could’ve produced the iPad Mini two years ago — but it didn’t, because the iPad and iPhone were generating more revenue than you could ever imagine. Now, with the Nexus 7 and Kindle Fire HD establishing a 7-inch beachhead, Apple has been forced to respond. The cynic in me says that Apple has had the iPad Mini waiting in the wings for years, but perhaps I ought to adjust my tinfoil hat.
Steve Jobs and his iPad
Historically, I would’ve said that Apple knows exactly what it’s doing — that it knows exactly how to play to its strengths of superlative industrial design, and masterful control of human needs and desires. With the recent firing of Scott Forstall (who was in charge of the iOS 6 Maps debacle earlier in the year), though, and now the iPad Mini, perhaps Apple is off balance, on tilt.
I’m still absolutely sure that Apple will make a fortune from the iPad Mini, but I’m less positive about the company’s long-term prospects. It seems the company has undergone a sizable ideological shift. The old Apple — Steve Jobs’ Apple — was all about building visionary products, and then inviting people to come play with them. The new Apple seems more interested in simply making as much money as possible, as quickly as possible — which works in the short term, but will come to an abrupt halt when another company takes up the visionary mantle.

How technology is creating a reading revolution

Taken from: http://www.extremetech.com/computing/139052-how-technology-is-creating-a-reading-revolution


Reading has seen a big change in the last few years. With high-definition video, hyper-real video games, and high-quality audio so readily available, it is a little counter-intuitive that boring old books, and the technology behind them, are still going from strength to strength. Really, there has never been a better moment in history to be a reader.
The technology of reading can’t be discussed without bringing up electronic paper. The technology that drives the Kindle and the Pebble has made low-power, long-lasting dynamic reading devices possible. Not only can electronic paper (e-ink) displays be used in direct sunlight without glare, they also draw power only when the display updates. Just like its printed counterpart, a static e-ink page draws no power, so it can be left on all of the time.
Kindle Paperwhite
At the same time, more traditional backlit displays are getting much better for reading. The incredibly high-res displays in the iPad, Kindle Fire HD, and recently announced Nexus 10 can show images at such a high resolution that the human eye cannot distinguish individual pixels. That’s what Apple calls “Retina.” Even phones have become good reading devices, now that squinting and zooming are no longer needed to make out a paragraph. These high-quality screens are making huge strides in readability, and they are forcing traditionally low-res computer monitors to get their act together.
It’s not all about hardware, though. Content availability and pricing are a huge factor in this reading renaissance. Places like Project Gutenberg, Google Books, and the Internet Archive offer countless public domain and Creative Commons books. The Amazons, Googles, and Apples of the world offer new e-books in the eight-to-fifteen-dollar range. Now that WiFi and cellular coverage have proliferated across North America and Europe, you can download a new book virtually anywhere in highly populated areas. Even if your connection is spotty, flash storage allows us to carry entire libraries with us at all times. This truly is a revolution.
Instapaper using the OpenDyslexic font
This is a boon for most of us, but what about people with disabilities? Recent technology has probably changed the most in the area of reading for the disabled. Built into every Mac, iOS, and Windows device is a text-to-speech engine, so that simply selecting text will allow the user to play it back out loud. The wonderful OpenDyslexic font might look strange at first sight, but it’s designed so that people with dyslexia can more easily read text. Apps like Instapaper are already including it. Audiobooks, once something very pricey and inconvenient, are now affordable and easy. Audible and iTunes are both great places to get high quality audiobook content that will play on pretty much any device you want. Digital distribution and the proliferation of computers and smartphones have made reading much more accessible to the visually impaired, and that is something for the industry to be very proud of.
Reading has never been easier, and we keep seeing breakthroughs every year. Without a doubt, we have the capacity to be the most well-read generation in history.

Tips and tricks for clearing up a cluttered hard drive

Taken from: http://www.extremetech.com/computing/139359-tips-and-tricks-for-clearing-up-a-cluttered-hard-drive

No matter how big our hard drives get, we’ll end up filling them to the brim. Spinning disks are increasing in size rapidly at a very low price, but SSDs are still relatively expensive and small. Managing your data is particularly important on a laptop, where there is little wiggle room as far as external disks go. It’s not very practical to schlep a USB 3 or Thunderbolt drive everywhere you bring your laptop. Using these tricks, you’ll be able to identify what is taking up the most room, gain back some space with file compression, and know how much space you should keep free at any given time.

Fragmentation

First off, how much space do you really need to be free on your main drive, and why can’t you just use it all? Two things: swap space and fragmentation. If your disk is too full, your computer doesn’t have any room to move data out of RAM. This can make your computer act up something awful, and can even lead to freezing up. I have a relatively small Bootcamp partition on my iMac, and I wasn’t particularly watchful about how much space I was using. After a few lock-ups, I realized I was down to only about 4% of my total drive’s capacity. You’ll want to leave around 10% of your drive capacity available at all times so your drive doesn’t become overly fragmented. This is much less of an issue with SSDs, but hard drives can lose a lot of performance if overly fragmented. In fact, sometimes you won’t even be able to effectively defragment your over-full drive unless you boot from another disk.
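Keeping an eye on that headroom is easy to automate. The sketch below checks a volume's free space and flags it when it drops under a configurable threshold (10% by default, matching the rule of thumb above):

```python
import shutil

def free_fraction(path):
    """Fraction of the volume containing `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def needs_cleanup(path, threshold=0.10):
    """True when free space has fallen below the threshold."""
    return free_fraction(path) < threshold

low_on_space = needs_cleanup("/")  # use a drive letter like "C:\\" on Windows
```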
Grand Perspective Screenshot

Freeing up space

Now, how can you tell what is taking up so much space on your drive? Sure, you can always take a peek at the properties of known media-heavy folders. There is a much better way of visualizing what’s using your disk, though. Apps like GrandPerspective (pictured above), DiskInventoryX, WinDirStat, DaisyDisk, and KDirStat are a godsend for the data packrats among us. These apps show a scaled and color-coded visualization of your entire disk. The bigger a rectangle is, the more space it takes up. This is particularly useful for sniffing out pesky large files, like unused virtual machines and remnants from video editing projects.
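The same idea works from a short script if you'd rather skip the GUI: walk the tree, collect sizes, and sort. A minimal sketch that reports the largest files under a directory:

```python
import os

def largest_files(root, top_n=10):
    """Return (size_bytes, path) pairs for the top_n biggest files under root."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or deny access mid-walk
    return sorted(sizes, reverse=True)[:top_n]
```

Apps like GrandPerspective and WinDirStat do essentially this, then draw the results as the scaled, color-coded treemap described above.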
Clusters Screenshot

Compression

So, what if you can’t get rid of anything on your drive, but you still need a bit more space? There is the option of file compression. Your computer’s file system can dynamically compress and decompress data on the fly. Since Snow Leopard, this has been available in Mac OS X’s HFS+ filesystem, but NTFS (Windows) and ZFS have this capability as well. On the Mac, a simple app called Clusters is available for only $12.95, and it allows you to selectively choose which files and folders you want compressed. On Windows, it’s as simple as right-clicking what you want to compress, going into the properties, and toggling a checkbox. Of course, all of this can be managed through the command line, but GUIs are a better option if you’re new to transparent file compression.
Compression doesn’t just magic up free space, though. Even on your super-fast SSDs, this does come at a performance cost. Your computer has to decompress the data every time you access it. Don’t go too crazy with your compression, or you’ll end up looking at your watch and tapping your toes while your computer tries to catch up.
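The space/CPU trade is easy to see with any general-purpose compressor. The sketch below uses zlib (the same DEFLATE family of algorithms, though not the filesystems' own codecs) to show the round trip: the data shrinks dramatically, but every read has to pay the decompression step back.

```python
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 100
packed = zlib.compress(text, level=6)
ratio = len(packed) / len(text)  # far below 1.0 for repetitive data

# Reading the file back always costs a decompression pass.
restored = zlib.decompress(packed)
```

Highly repetitive data like this compresses extremely well; already-compressed media (JPEGs, MP4s) barely shrinks at all, which is why selective compression beats compressing everything.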
With a few tools and a little vigilance, your drive doesn’t have to be full anymore. Your best bet is always to buy a bigger drive, but it won’t be long until that’s filled up as well.

Saturday, October 20, 2012

Google Wallet Exec: No Surprise Digital Payments Are Slow Going

Taken from: http://allthingsd.com/20121019/google-wallet-exec-no-surprise-digital-payments-are-slow-going/?refcat=mobile


It took 50 years for the credit card to become the dominant means of payment, so it shouldn’t be surprising that mobile payments haven’t immediately taken off.



Everyone is expecting change to happen in weeks or months, but it will take time, says Osama Bedier, Google’s VP of Wallet and payments. “We will have mobile payments,” Bedier said, speaking at the Global Mobile Internet Conference in San Jose on Friday.

There’s room for more than one player, Bedier said, but each has to solve an issue. “There’s a lot of ideas and not a lot of problems being solved,” Bedier said. “Credit cards already work pretty well if all you have to do is payments.”

Ultimately, the former PayPal exec said that mobile payments have to either save time, save money or both. Technology can do that, he added. On the technology front, Bedier said he remains a believer that near field communication technology (NFC) will be ubiquitous on both phones and payment terminals within five years. NFC will also find its way into many other places in the logistics chain. “NFC chips will replace bar codes,” Bedier said.

But many believe NFC will take a year or more to take off, especially now that Apple declined to embed the chips into its latest release, the iPhone 5. Other payment companies, like PayPal, have decided to find other avenues to enabling digital payments without it, and companies like Starbucks are relying on something as simple as a barcode.

Bedier said at least half of transactions will be mobile within five years, but remained short on details on how much volume Google Wallet is doing today. “The numbers are compelling,” he said, without revealing any of those compelling numbers.

A rival executive from Scvngr, which runs a payments service called LevelUp, recently tweeted somewhat hyperbolically that Google Wallet has five users.

“We have a lot more than five users,” Bedier said, though he wouldn’t say how many customers they have. He did say that the company doubled its transaction volume in the first few weeks after transitioning its Wallet transactions to the cloud back in August. Still, the company faces some obvious adoption hurdles because today Google Wallet is only available on NFC-capable Android phones through one U.S. carrier: Sprint.

“We’re seeing that trajectory continuing,” he said.

The other three carriers — AT&T, Verizon Wireless and T-Mobile — are backing ISIS, which is launching next week. Bedier acknowledged the lack of support from carriers for Google Wallet. “We haven’t yet seen eye to eye on a mobile wallet solution,” he said. “So far, they have said they want to do their own thing and we respect that.”

The Pirate Bay moves to the cloud to evade the police

Taken from: http://www.extremetech.com/computing/138037-the-pirate-bay-moves-to-the-cloud-to-evade-the-police



In a tenacious move that should finally make The Pirate Bay “raid proof,” the world’s largest torrent site has moved… to the cloud!
First The Pirate Bay got rid of its trackers. Then it stopped hosting torrent files. And now, in the words of TPB’s Winston Brahma, “we’ve gotten rid of the servers.” He nebulously continues: “Slowly and steadily we are getting rid of our earthly form and ascending into the next stage, the cloud.”
In short, The Pirate Bay website (the search engine that you use to look for torrent magnet links) is no longer hosted on hardware owned and managed by the TPB admins. Instead, the website is now hosted by multiple cloud companies in two separate countries. There are still two important pieces of TPB-owned hardware, however: A load balancer that encrypts requests before passing them along to the cloud instances, ensuring that the cloud providers themselves don’t know the identity of users accessing The Pirate Bay, and a transit router.
In theory, if TPB was raided by the police today, all they would find is a transit router. If they followed the data trail they would come across the load balancer, which in this case is a diskless server “with all the configuration stored in RAM.” (The theory here is that, if the police turn the server off, they will lose the IP addresses of the cloud instances where the TPB website is actually stored).
Whack-a-Mole: ISPs vs. The Pirate Bay
If the police finally work their way back to the cloud hosts, all they will find are encrypted virtual machine images (and hopefully no logs). There are backups of these disk images, of course, which can be deployed to other cloud hosts with relative ease. Perhaps more importantly, if these servers lose contact with the load balancer for more than eight hours, they automatically shut down, eliminating any chance of the police accessing their data.
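That eight-hour cutoff is essentially a dead-man switch: each cloud node tracks its last contact with the load balancer and halts itself once the window lapses. A sketch of the logic, with the shutdown decision as a boolean stand-in for whatever action TPB's servers actually take:

```python
import time

DEADLINE_S = 8 * 60 * 60  # eight hours without contact triggers shutdown

class DeadMansSwitch:
    def __init__(self, now=None):
        self.last_contact = time.time() if now is None else now

    def heartbeat(self, now=None):
        """Record contact from the load balancer."""
        self.last_contact = time.time() if now is None else now

    def expired(self, now=None):
        """True once the contact window has lapsed; time to shut down."""
        t = time.time() if now is None else now
        return t - self.last_contact > DEADLINE_S

switch = DeadMansSwitch(now=0)
switch.heartbeat(now=100)
still_alive = not switch.expired(now=100 + DEADLINE_S)   # exactly at the limit
should_halt = switch.expired(now=100 + DEADLINE_S + 1)   # one second past it
```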
On a practical level, though, there are still plenty of ways to shut down The Pirate Bay. The load balancer will be hosted with a major ISP — and it’s easy enough for the feds to compel an ISP to pull the plug. The site would then be down until TPB could install another load balancer — which probably isn’t cheap or easy. Moving to the cloud does nothing to prevent DNS blackholing, either — though TPB’s move to embrace IPv6 definitely helps in that regard. Realistically, if we cut through all the hyperbole, the real advantages of moving to the cloud are less downtime and less expenditure for the TPB admins.
Personally, I’m glad to see that The Pirate Bay settled on cloud hosting. For a while it was considering a fleet of low-orbit server drones, which was a little bit on the crazy side.

Wednesday, October 17, 2012

Barbican's Rain Room: it's raining, but you won't get wet

Taken from: http://www.barbican.org.uk/news/artformnews/art/visual-art-2012-random-internati

 
Have you ever been caught in a terrible downpour and wished you could make it stop? Do you want to have the power to control the weather? Now, your dream can come true.
Rain Room, a new exhibition at London's Barbican Centre, marries art, science, and technology to do just that. Designed by studio Random International, it premiered at The Curve, Barbican on October 4, 2012. The Rain Room is a 100-square-metre field of falling water.

When you enter the Rain Room, you hear the sound of water and feel the moisture in the air as you discover thousands of falling drops that respond to your presence and movement. Sensors detect where visitors are standing, and the water halts above their heads. So despite standing in a space filled with falling rain, visitors remain dry.

This is made possible by 3D sensor cameras fixed to the ceiling of the Rain Room. Every person who walks into the room is tracked, and as long as you move slowly, the rain stops overhead. More than just a technical work, Rain Room is about people. Described as a "social experiment" by one of the artists who created it, it is a terrific way to observe different reactions and interactions.
The experience is peaceful, and quite different from standing in the rain under an umbrella, since you don't hear rain battering on the fabric.
Rain Room runs at The Curve until March next year. If you get the chance to be this close to the rain, don't miss it.

Monday, October 15, 2012

MIT creates carbon nanotube pencil, doodles some electronic circuits

Taken from: http://www.extremetech.com/extreme/137555-mit-creates-carbon-nanotube-pencil-doodles-some-electronic-circuits

A team of MIT chemists has created a carbon nanotube “lead” that can be used to draw freehand electronic circuits with a standard mechanical pencil.
In a normal pencil, the lead is usually fashioned out of graphite and a clay binder. Graphite, as you may already know, is a form of carbon that is made up of layer after layer of the wonder material graphene. When you write or draw with a graphite pencil, a mixture of tiny graphene flakes and clay are deposited on the paper, creating a mark. (Incidentally, pencil leads never contained lead; it’s just that when graphite was first used in the 1500s, they thought it was lead ore, and the name stuck).
With MIT’s carbon nanotube pencil, the lead is formed by compressing single-walled carbon nanotubes (SWCNT), until you have a substance that looks and behaves very similarly to graphite. The difference, though, is that drawing with MIT’s pencil actually deposits whole carbon nanotubes on paper — and carbon nanotubes have some rather exciting properties.
In this case, MIT is utilizing the fact that SWCNTs are very electrically conductive — and that this conductivity can be massively altered by the introduction of just a small amount of another compound, namely ammonia.
In the picture above, electricity is applied to the gold electrodes (which are imprinted in the paper). The carbon nanotube pencil is used to fill in the gaps, and effectively acts as a resistor. When ammonia gas is present, the conductivity of the nanotubes decreases, and thus resistance increases — which can be easily measured. Carbon nanotubes are so sensitive that MIT’s hand-drawn sensor can detect concentrations of ammonia as low as 0.5 parts per million (ppm).
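Reading such a chemiresistive sensor comes down to comparing the measured resistance against a clean-air baseline: a relative rise beyond some calibrated threshold signals the gas. A sketch of that decision logic; the 5% threshold and the resistance values are illustrative assumptions, not figures from the MIT work:

```python
def gas_detected(baseline_ohms, measured_ohms, rel_threshold=0.05):
    """Flag gas when resistance rises more than rel_threshold above baseline."""
    rise = (measured_ohms - baseline_ohms) / baseline_ohms
    return rise > rel_threshold

clean = gas_detected(10_000, 10_100)  # +1% rise: below the threshold
alarm = gas_detected(10_000, 11_000)  # +10% rise: above the threshold
```

Mapping the size of the rise back to a concentration in ppm would require a calibration curve for the specific nanotube lead, gas, and geometry.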

There are two main takeaways here. The first is that MIT has found a form of carbon nanotubes that is stable, safe, and cheap to produce. Second, carbon nanotubes have been used in sensors before, but usually the process involves dissolving SWCNTs in solvents, which can be dangerous. Here, creating a carbon nanotube sensor is as simple as drawing on a piece of paper — either by a human, or an automated process.
The team will now work on other carbon nanotube leads that can be used to detect other gases, such as ethylene (produced by fruit as it ripens) and sulfur (for detecting natural gas leaks). It’s also worth noting that the research was partly funded by the US Army/MIT Institute for Soldier Nanotechnologies — so it wouldn’t be surprising if military personnel are eventually outfitted with these sensors… or perhaps their very own carbon nanotube pencil, for MacGyver-like sensor fabrication in the field.

Saturday, October 13, 2012

New encryption method avoids hacks by saving your password in multiple locations

Taken from: http://www.extremetech.com/computing/137606-new-encryption-method-avoids-hacks-by-saving-your-password-in-multiple-locations

One of the central problems in modern computer security is the need to protect an ever-increasing amount of user data from an enormous array of potential threats. Researchers at the security firm RSA have proposed a new method of securing passwords against database hacks: breaking them into pieces and storing the pieces in separate locations. From the user’s perspective, nothing would change — you’d visit a website, type your password, and log in normally.
Server-side authentication, however, would be considerably different. Currently, when you transmit a password to a website, the password is typically hashed or encrypted in some fashion. The server doesn’t store your password in plaintext, but rather the hashed value of your password. Cryptographic hashes can in principle be reversed by brute force, but well-designed hashes are infeasible to crack in any reasonable amount of time using conventional hardware. The problem is that this approach looks good in theory but is often flawed in practice.
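The article doesn't name a specific hash, but a typical single-server scheme today salts each password and runs it through a deliberately slow function. A sketch using PBKDF2 from Python's standard library; the iteration count is a common choice, not a prescription:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow to resist brute force

def hash_password(password, salt=None):
    """Return (salt, digest) suitable for storage; plaintext is never kept."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
ok = verify_password("correct horse battery staple", salt, digest)
bad = verify_password("wrong guess", salt, digest)
```

The flaw "in practice" is that this whole table sits in one database: steal it, and every hash can be attacked offline — which is exactly what RSA's split-server proposal addresses.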
What RSA is proposing is a system that would break a password into two halves, and then store each half in different locations — perhaps on a different hard disk in the same data center, or maybe on the other side of the world. The separate versions are then hashed again to create a new string. The two password servers would then compare the new string to determine if the hash values match. If they do, the password is legitimate. If they don’t, the login fails.
Splitting the password between multiple servers ensures that if one server was compromised, the hackers would gain nothing but halved hash values with no way to combine them into an appropriate authentication scheme. Without knowledge of the second combination function, there’s no way to reverse engineer the hashes back to plaintext. The halved password hashes themselves would refresh periodically, further limiting the usefulness of a database hack.
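The split-and-hash idea can be sketched in a few lines. This is a toy illustration only, not RSA’s actual protocol — the function names, the straight string split, and the use of salted SHA-256 per half are all assumptions for the sake of the example:

```python
import hashlib
import os

def split_and_store(password: str):
    """Split a password into two halves and hash each half with its own
    salt, as if each (salt, hash) pair lived on a different server."""
    mid = len(password) // 2
    left, right = password[:mid], password[mid:]
    salt_a, salt_b = os.urandom(16), os.urandom(16)
    server_a = (salt_a, hashlib.sha256(salt_a + left.encode()).hexdigest())
    server_b = (salt_b, hashlib.sha256(salt_b + right.encode()).hexdigest())
    return server_a, server_b

def verify(password: str, server_a, server_b) -> bool:
    """Each 'server' independently re-hashes its half; the login succeeds
    only if both halves match."""
    mid = len(password) // 2
    left, right = password[:mid], password[mid:]
    ok_a = hashlib.sha256(server_a[0] + left.encode()).hexdigest() == server_a[1]
    ok_b = hashlib.sha256(server_b[0] + right.encode()).hexdigest() == server_b[1]
    return ok_a and ok_b
```

The point of the sketch: a thief who dumps one server’s database gets only one salted half-hash, which by itself authenticates nothing.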
This last issue highlights one of the most frustrating facts about online security. The size and scope of the technical attack vectors is overwhelming and shifts on a yearly basis as new methods are discovered or come into vogue. Still, the RSA approach does at least close one potential loophole, and it’s scalable. The current implementation uses two servers, but the method could be deployed across four or more separate locations.

Using Kinect to turn any surface into a multi-user, multi-finger touchscreen

Taken from: http://www.extremetech.com/computing/137630-using-kinect-to-turn-any-surface-into-a-multi-user-multi-finger-touchscreen

With the aid of Microsoft’s much-loved Kinect sensor, engineers at Purdue University have created a system that can turn any surface — flat or otherwise — into a multi-user, multi-finger touchscreen.
The setup is disgustingly simple: You point the Kinect at some kind of surface, plug it into a computer, and then proceed to poke and prod that object as if it were a multi-finger touchscreen. You can also throw a projector into the mix, turning the object into a multitouch display — a lot like Microsoft’s original Surface tabletop display (now called PixelSense, since Microsoft decided to reuse the Surface trademark for its upcoming tablets).
The magic, of course, is being performed in software. The first time you use the system, which the engineers call “Extended Multitouch,” the Kinect sensor analyzes the surface it’s pointing at. If you want to use a table as a touchscreen, for example, the Kinect takes a look at the table and works out exactly how far away the surface is (a depth map, in essence). Then, if you want to interact with the touchscreen, the Kinect works out how far away your fingers are — and if they’re the same distance as the table, the software knows you’re making contact. The software can do this for any number of fingers (or other implements, such as pens/styluses).
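The calibrate-then-compare trick is easy to sketch with NumPy. This is a toy version under stated assumptions — depth frames as 2-D arrays of millimetres and a 10 mm touch tolerance are guesses, and the function names are hypothetical, not from the Purdue software:

```python
import numpy as np

def calibrate_surface(depth_frames):
    """Average several depth frames of the empty surface to build the
    per-pixel baseline depth map (the first-use calibration step)."""
    return np.mean(np.stack(depth_frames), axis=0)

def touch_mask(frame, baseline, tolerance_mm=10):
    """A pixel counts as a touch when something sits closer to the camera
    than the surface, but within tolerance_mm of it - i.e. a fingertip
    resting on the table rather than a hand hovering above it."""
    diff = baseline - frame          # positive = closer than the surface
    return (diff > 0) & (diff < tolerance_mm)
```

A fingertip 5 mm off the table registers as contact; a hand 100 mm above it does not — which is the whole "same distance as the table" test described above.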
To test Extended Multitouch, the engineers wrote a few simple programs, including a sketching program that allowed users to draw with a pen and multiple fingers simultaneously. Overall accuracy is good, but not as reliable as displays with built-in touch sensors (such as your smartphone). As far as handedness and gesture recognition goes, accuracy is around 90% — but that will increase with sensor resolution and continued software tweaks.
Built-in touch sensors are expensive and require a local computer controller; depth-sensing cameras, by contrast, could cheaply blanket a whole house. If you buried a grid of Kinects in your ceiling, you could turn most of your environment into a touch interface. In the words of Niklas Elmqvist, one of Extended Multitouch’s developers, “Imagine having giant iPads everywhere, on any wall in your house or office, every kitchen counter, without using expensive technology.”

Sunday, October 7, 2012

Backup master class: Are online file lockers backup?

Taken from: http://www.extremetech.com/computing/137429-backup-master-class-are-online-file-lockers-backup

Cloud storage has also become extremely popular. Asus offers online storage as a perk with its tablets. Google, Amazon, and Microsoft all have their own versions; Dropbox passed the 50 million user mark a year ago. This article discusses file lockers in general, but since we’re primarily concerned with backup rather than file sharing, it can’t be characterized as a review.
Most cyberlockers prioritize their sharing capabilities and don’t literally refer to themselves as backup services — but the language is very similar. For example:
  • Amazon Cloud: “Never worry about losing your precious photos, documents and videos. Store them in your Cloud Drive where they will be protected from a hard drive crash or a lost or stolen laptop.”
  • Dropbox: “Even if you accidentally spill a latte on your laptop, have no fear! You can relax knowing that Dropbox always has you covered, and none of your stuff will ever be lost.”
  • Google Drive: “Things happen... No matter what happens to your devices, your files are safely stored in Google Drive.” (emphasis original)
  • Microsoft SkyDrive: “With SkyDrive, you can securely store your files… The files and photos you store in SkyDrive are protected by first-rate security features.”

Do file lockers count as online backup services?

Yes and no. File lockers are backups in the sense that they store files in an offsite location. If your hard drive suddenly dies, the files you copied to a file locker will still be there. Like online backup services, most file lockers retain multiple versions of a file to allow you to revert changes to a document. Some also offer undelete protection; Microsoft’s SkyDrive recently implemented a Recycle Bin feature that allows you to recover files for up to 30 days after you’ve deleted them.
Mozy's backup selection
File locker folders
One of the key distinctions between a backup service and a file locker is that modern backup services typically auto-select key folders and files to back up by default. File lockers, in contrast, simply drop a new folder into Windows Explorer and leave the rest to you. You can use Dropbox or Google Drive as a backup — provided you manually arrange for everything you want backed up to land in the appropriate subdirectories.
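In practice, that manual configuration often amounts to linking folders into the sync directory so the locker’s client picks them up. A sketch with made-up paths (the `Dropbox/backup` layout is an assumption, and a throwaway temp directory stands in for the real home folder; Windows users would use an NTFS junction via `mklink /J` instead of a symlink):

```python
import os
import tempfile

# Throwaway directory standing in for the user's home folder.
home = tempfile.mkdtemp()
src = os.path.join(home, "Documents", "projects")
dropbox = os.path.join(home, "Dropbox", "backup")
os.makedirs(src)
os.makedirs(dropbox)

# Link the working folder into the sync tree; from here on, the locker's
# client sees (and uploads) everything inside it.
link = os.path.join(dropbox, "projects")
os.symlink(src, link)

with open(os.path.join(src, "report.txt"), "w") as f:
    f.write("draft")

print(os.listdir(link))   # report.txt now appears inside the sync folder
```

This is exactly the fragility the article is pointing at: forget to link a folder and it silently goes unprotected, whereas a backup service would have selected it by default.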
File lockers are often one component of a larger service, even if they’re offered to anyone who wants one. You don’t have to own a Kindle, use Google Docs, or make use of Microsoft’s various online services to use Cloud Drive, Google Drive, or SkyDrive, but all three companies make certain you’re aware of those options at every turn. Most content management is handled either in Windows Explorer or via a basic browser interface.
Cloud lockers also tend to handle sharing and locality differently than backup services. Mozy and Carbonite offer mobile clients that you can use to access archived data, but they don’t include external links that another person can use to access your files. As for locality, backup services almost universally insist that any data to be kept backed up is also kept locally. Many file lockers offer a more nuanced policy.

Scientists planning $1 billion mission to drill into the Earth’s mantle

Taken from: http://www.extremetech.com/extreme/137281-scientists-planning-1-billion-mission-to-drill-into-the-earths-mantle


Deep below our feet, past the thin crust of our planet, lies the mantle. Despite making up the vast majority of the Earth’s mass, we know very little about the composition of this region. What could be there? Mole-men? Crab people? Probably nothing so fanciful, but a team of international researchers is about to find out. At an estimated cost of $1 billion, geologists headed by the Integrated Ocean Drilling Program (IODP) are preparing to start drilling into the mantle for the first time.

The mantle is a 3,000 km thick layer of super-heated, mostly solid rock that fills the space between the human-dominated crust and the dense, iron-rich core. Knowing more about the makeup of the mantle could have a significant impact on our understanding of the origins and nature of Earth — everything from seismology to climatology and plate tectonics could be affected.
This isn’t going to be an afternoon excursion to the drilling rig, though. It’s going to take a long time to reach the mantle, which lies beneath at least 6 km of crust even under the best conditions. After considering various methods, researchers have decided to drill through the crust in the Pacific Ocean, the only place where the 6 km figure holds true. On dry land the crust can be ten times thicker.
To reach the mantle, scientists will be using a custom-built Japanese drilling rig called Chikyu. The Chikyu was first launched in 2002, and is capable of carrying 10 km of drilling pipes. The team is going to need most of that to get down to the seabed and through to the mantle. The Chikyu holds the current deep-sea drilling record, having made it 2.2 km into the seafloor. This will be a much greater challenge.
The goal is impressive all on its own, but it isn’t until you look at the logistics of making it all work that you realize what a monumental undertaking this is. The high-tech drill bits being used to bore down into the crust only have an active lifespan of 50-60 hours. After that, the team will have to back out of the hole, change the bit, and plunge back down to the murky depths. To top it off, the borehole is only 30 cm across… and at the bottom of the sea.
One researcher involved in the project, Damon Teagle, described the procedure as trying to align a steel tube the width of a human hair with a 1/10 mm hole at the bottom of a swimming pool. Accomplishments like the Curiosity/MSL landing are certainly great science, but here we have some amazingly precise science happening right on Earth.
With the technology available today, researchers at the IODP believe that it will take years to reach the mantle. It’s going to be time consuming to change out those drill bits every few days. Teagle suspects that the project could get underway within the next few years. Barring a significant advancement in drilling technology, we should get our first samples from the mantle in the early 2020s.
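To get a feel for why “years” is plausible, here is a back-of-envelope estimate. Only the 6 km depth and the ~50-60 hour bit life come from the article; the penetration rate and the round-trip time for a bit change are pure assumptions picked for illustration:

```python
# Figures from the article:
DEPTH_M = 6000                   # crust thickness at the chosen site
BIT_LIFE_HOURS = 55              # midpoint of the quoted 50-60 hour lifespan

# Assumed values (not from the article):
PENETRATION_M_PER_HOUR = 2.0     # hypothetical average rate through hard rock
ROUND_TRIP_HOURS = 100           # hypothetical time to withdraw, swap the bit,
                                 # re-find the 30 cm hole, and descend again

drilling_hours = DEPTH_M / PENETRATION_M_PER_HOUR
bit_changes = int(drilling_hours // BIT_LIFE_HOURS)
total_hours = drilling_hours + bit_changes * ROUND_TRIP_HOURS
print(f"~{bit_changes} bit changes, ~{total_hours / (24 * 365):.1f} years "
      "of continuous operation")
```

Even with these charitable numbers, the bit swaps alone add thousands of hours of overhead — and real-world weather, maintenance, and funding cycles stretch the calendar time far beyond the continuous-operation figure.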
This project might not have the sexiness of landing a rover on Mars, but it has the potential to vastly increase our knowledge about the evolution and fate of our planet. For the time being it’s the only one we’ve got, so that’s important knowledge to have.

Microsoft promises major Windows 8 app improvements before Oct 26 launch

Taken from: http://www.extremetech.com/computing/137469-microsoft-promises-major-windows-8-app-improvements-before-oct-26-launch

Microsoft has taken some flak for the purported condition of Windows 8 in recent weeks; Intel and Redmond tangled on the topic last week in a bit of corporate he-said/she-said. A new blog post from the Building Windows 8 team indirectly addresses some of the concerns potential W8 adopters might have in the wake of the public spat, by promising a number of updates will be delivered between now and launch day.
According to Gabriel Aul, new updates will start rolling out today with an improved Bing app, but that’s just the beginning. After Bing, Microsoft is rolling out improvements to SkyDrive, Mail, Photos, Maps, its News service, and a number of others.
The full list of app changes is available on the BW8 blog. We’ve put together some of the highlights and most important differences below:
One of the most significant deficiencies of the current Mail client in Windows 8 is that it lacks support for POP or IMAP. Microsoft apparently doesn’t plan to support POP at all, but IMAP, at least, is coming before launch day. Other changes are clearly intended to improve Metro functionality (Windows Photo) and simplify switching between the Windows 8-style UI (Metro) and Desktop. Content pagination and zoom levels are other areas where Windows 8 has been measured and found wanting; these updates will hopefully solve some of the ongoing problems.
Closed caption support, meanwhile, might seem like a low-priority issue, but it’s actually a major concern for the hearing impaired. The FCC ruled earlier this year that broadcasters had to begin including closed caption support in streaming video; Windows 8 support is an important step to making such capability ubiquitous.
Other improvements are less clear. “Improved offline reading experience” and “rich ‘now playing’ experience” don’t tell us much about the new features or what users can expect. Integrating content from the New York Times and Wall Street Journal will improve the range of information “News” presents, but we’ll be curious to see how Windows 8 treats the NYT’s 10-article-a-month preview option and the WSJ’s paywall.
Part of what’s noteworthy here is how Redmond has successfully changed updates from something it delivers to a monolithic OS to fix security issues into targeted features and bug fixes that expand application capability. To be sure, some of that expanded capability is being put towards things Windows 8 should’ve done before it went RTM — but applying continuous improvements on an app level lets the company talk about new features far more effectively than the standard Windows Update screen.

Sunday, September 30, 2012

MIT develops holographic, glasses-free 3D TV

Taken from: http://www.extremetech.com/computing/132681-mit-develops-holographic-glasses-free-3d-tv


The masterful engineers at the Massachusetts Institute of Technology (MIT) are busy working on a type of 3D display capable of presenting that elusive third dimension without any eye gear. We say “elusive” because what you’ve been presented at your local cinema (with 3D glasses) or on your Nintendo 3DS console (with your naked eye) pales in comparison to what these guys and gals are trying to develop: a truly immersive 3D experience, not unlike a hologram, that changes perspective as you move around.

Today’s 3D technology falls short in a number of ways. The most obvious is the need for special viewing glasses that may be uncomfortable to wear, darken the on-screen imagery, and are prone to annoying finger smudges that are a bear to wipe off.

Nintendo’s 3DS console is one such device that dispenses with the need for eye gear by using two layered liquid crystal display (LCD) screens to create the illusion of depth. Offset images create a sense of perspective, while alternating light and dark bands emanating from the bottom screen ensure your eyeballs only take in the images they’re supposed to at any given moment. It’s a serviceable recipe for rudimentary glasses-free 3D, albeit on a small scale suitable for handheld consoles.

Glasses Free 3D TV
In order to produce a convincing 3D illusion, MIT's three-panel technology requires a display with a 360Hz refresh rate.
What the researchers at MIT have come up with is a more sophisticated way to paint a 3D scene that changes perspective as you move around. It does away with the need to sit in a fixed, optimal position (think of how in a movie theater, everyone views the same perspective regardless of where they sit), and in fact could ultimately encourage changing your viewing angle, depending on how creative developers get with the technology. To give you an example, imagine leaning left in your chair to spy an enemy crouched behind a crate in a first-person shooter (FPS).

The project is called High Rank 3D (HR3D). To begin with, HR3D involved a sandwich of two LCD displays, and advanced algorithms for generating top and bottom images that change with varying perspectives. With literally hundreds of perspectives needed to accommodate a moving viewer, maintaining a realistic 3D illusion would require a display with a 1,000Hz refresh rate.

To get around this issue, the MIT team introduced a third LCD screen to the mix (pictured above). This third layer brings the refresh rate requirement down to a much more manageable 360Hz. More importantly, it means short term application of this technology is possible. Currently, TV technology maxes out at 240Hz, so a high-speed panel in the 360Hz range isn’t all that far-fetched.

The researchers plan to present a tri-panel prototype display at Siggraph. In the meantime, it’s worth carving out three and a half minutes of your time to watch the video below, which explains the technology in visual detail.