Sunday, September 30, 2012

MIT develops holographic, glasses-free 3D TV

Taken from: http://www.extremetech.com/computing/132681-mit-develops-holographic-glasses-free-3d-tv


The masterful engineers at the Massachusetts Institute of Technology (MIT) are busy working on a type of 3D display capable of presenting that elusive third dimension without any eye gear. We say “elusive” because what you’ve seen at your local cinema (with 3D glasses) or on your Nintendo 3DS console (with your naked eye) pales in comparison to what these guys and gals are trying to develop: a truly immersive 3D experience, not unlike a hologram, that changes perspective as you move around.

Today’s 3D technology falls short in a number of ways. The most obvious is the need for special viewing glasses that may be uncomfortable to wear, darken the on-screen imagery, and are prone to annoying finger smudges that are a bear to wipe off.

Nintendo’s 3DS console is one such device that dispenses with the need for eye gear by using two layered liquid crystal display (LCD) screens to create the illusion of depth. Offset images create a sense of perspective, while alternating light and dark bands emanating from the bottom screen ensure your eyeballs only take in the images they’re supposed to at any given moment. It’s a serviceable recipe for rudimentary glasses-free 3D, albeit on a small scale suitable for handheld consoles.

In order to produce a convincing 3D illusion, MIT's three-panel technology requires a display with a 360Hz refresh rate.
What the researchers at MIT have come up with is a more sophisticated way to paint a 3D scene that changes perspective as you move around. It does away with the need to sit in a fixed, optimal position (think of how in a movie theater, everyone views the same perspective regardless of where they sit), and in fact could ultimately encourage changing your viewing angle, depending on how creative developers get with the technology. To give you an example, imagine leaning left in your chair to spy an enemy crouched behind a crate in a first-person shooter (FPS).

The project is called High Rank 3D (HR3D). To begin with, HR3D involved a sandwich of two LCD displays, and advanced algorithms for generating top and bottom images that change with varying perspectives. With literally hundreds of perspectives needed to accommodate a moving viewer, maintaining a realistic 3D illusion would require a display with a 1,000Hz refresh rate.
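To see why refresh rate is the bottleneck, here is a back-of-envelope sketch in Python (our own illustration, not math from the MIT paper): brute-force time multiplexing would have to show every perspective in its own time slot, so the panel rate multiplies quickly. HR3D's content-adaptive layers share information across perspectives, which is how the real requirement drops to the figures quoted here.

```python
# Back-of-envelope: why naive glasses-free 3D needs absurd refresh rates.
# The 100-view count and 60Hz per-view rate are illustrative assumptions.
def naive_panel_rate(num_views, per_view_hz=60):
    """Brute-force time multiplexing gives each view its own time slot,
    so the panel must refresh at views x per-view rate."""
    return num_views * per_view_hz

print(naive_panel_rate(100))  # 6,000Hz for 100 views -- hopeless
# HR3D's layered, content-adaptive approach shares data between views,
# bringing the requirement down to ~1,000Hz (two panels) or 360Hz (three).
```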

To get around this issue, the MIT team introduced a third LCD screen to the mix (pictured above). This third layer brings the refresh rate requirement down to a much more manageable 360Hz. More importantly, it means short term application of this technology is possible. Currently, TV technology maxes out at 240Hz, so a high-speed panel in the 360Hz range isn’t all that far-fetched.

The researchers plan to present a tri-panel prototype display at SIGGRAPH. In the meantime, it’s worth carving out three and a half minutes of your time to watch the video below, which explains the technology in visual detail.

Western Digital launches 4TB drive, touts digital surveillance uses

Taken from: http://www.extremetech.com/electronics/136969-western-digital-launches-4tb-drive-touts-survelliance

Western Digital is launching its first 4TB drives today, and it’s aiming them at the enterprise storage market. The new drives come in SATA and SAS flavors, chock full of the latest high-end features, including vibration-tolerant hardware, stabilized motor shafts, pre-emptive wear leveling, and extended burn-in testing. The SAS flavor offers dual-port capability, and both models are five-platter designs packing 800GB per platter while maintaining a 7,200 RPM spindle speed.

They’re also apparently really, really good for digital surveillance. The feature image above is drawn from WD’s own prominent advertising on the RE4 series; the company emphasizes that these products (at least, the SATA flavors) are “ideal for servers, storage arrays, video surveillance, and other demanding applications.”
A full comparison of the specifications between the SATA and SAS versions is provided below.


These are some of the first 4TB drives on the market, and they command a premium (pricing ranges from $459 to $479); by comparison, the consumer-oriented HGST 4TB drive sells for $299 at Newegg. We’ll undoubtedly see 4TB prices fall in the next 12-18 months as higher platter densities are rolled out. Five platters is the typical maximum for drives these days, although HGST’s recent helium HDD announcement is proof that the company plans 6-7 platter drives — though probably not for mainstream consumer shipments.
These drives are marketed for “nearline” applications, meaning WD recommends them for configurations where large amounts of storage capacity are needed but access times aren’t critical. Nearline drives strike a balance between immediate access (a field increasingly dominated by SSDs) and long-term archival storage, where retrieval can take several minutes. As such, these drives offer high-end features but retain the more pedestrian 7,200 RPM speed rather than the 15,000 RPM that was once a hallmark of the enterprise storage industry.

Thursday, September 27, 2012

The dangerously small iPhone 5 nano-SIM: Trimming, unlocking, and adapters

Taken from: http://www.extremetech.com/electronics/136709-the-dangerously-small-iphone-5-nano-sim-trimming-unlocking-and-adapters


In the last few days, Apple has moved millions of iPhone 5 handsets. Apple didn’t radically alter the look of the new iPhone, but there is something new on the inside: The iPhone 5 is the first phone to adopt the recently approved nano-SIM standard. Apple pushed this design hard, much to the chagrin of other device makers. Now that the nano-SIM is in the wild, there are some things you ought to know about it.

What is a nano-SIM?

In late 2011, the European Telecommunications Standards Institute (ETSI) began sorting through possible designs for a next-generation SIM standard, the so-called fourth form factor (4FF). All the big players in the wireless industry listened to what the ETSI wanted in a new, smaller SIM and submitted proposals. Apple was in the thick of things early, as it wanted the smallest SIM possible for the iPhone 5.

RIM, Nokia, and Motorola favored a more radical rethinking of the SIM card. Their proposed designs were technically more advanced, and included the ability to be inserted and ejected with push-push mechanisms. They were also designed to be easily removable with a fingernail — no tray required.

Apple’s winning design is much more an evolution of the micro-SIM. It basically trims off all the excess plastic around the contacts, but keeps the same shape. Because this is essentially just the contacts with no plastic, there is no space for the catch needed for push-push mechanisms. The size of the Apple nano-SIM is a more serious concern. The ETSI originally wanted to make sure that the 4FF standard was shaped in such a way that consumers would not be able to cram it into a micro-SIM slot, where it would become virtually impossible to remove. The Apple nano-SIM that eventually won out is small enough to do exactly that, and if you turn it sideways, it even looks like it should fit into a micro-SIM slot. It’s exactly what the ETSI didn’t want.

Making your SIM fit in the iPhone 5

In the iPhone 5, the SIM card comes in a tray that you have to pop out with Apple’s included tool. You could use the SIM that comes with the phone, but what if you want to use your own micro-SIM? As with the micro-SIM before it, you can carefully trim down your SIM card to fit in your new device. This is one of the practical upshots of Apple’s design — it’s backwards compatible.
You can use the iPhone’s SIM tray as a guide for trimming off the excess bits of your larger card. Surprisingly, even older SIM cards with wider gold contacts will work: you can cut into the edge of the contacts without damaging the card. The nano-SIM that carriers are providing with the iPhone 5 is a little thinner than a micro-SIM, but the tray is deep enough to accommodate most cards.

Some companies are already making SIM cutters that slice off the excess plastic to turn your card into a perfect nano-SIM. These devices cost a few bucks, and they ensure you won’t accidentally cut into your SIM’s contacts. However, they are still hard to find — and scissors are everywhere.

What if you want to take your iPhone 5 nano-SIM and go back to a micro-SIM phone? We already know that a nano-SIM can fit into a micro-SIM slot and get stuck, basically ruining your phone. What you need is an adapter, which you might as well get if you’re going to buy a SIM cutter. These are simple plastic inserts that your nano-SIM docks into, making it fit larger micro-SIM slots — much like a microSD-to-SD card adapter.
If you’re planning to unlock an iPhone 5 for use on other carriers, like T-Mobile in the US, you will have to trim your SIM. Most carriers aren’t offering nano-SIMs just yet, and some international carriers might not have them for months. If nano-SIM doesn’t take off in other devices, you might have to continue cutting down SIMs.

Will high-mileage Nissan Leafs need costly battery replacements soon?

Taken from: http://www.extremetech.com/extreme/136894-will-high-mileage-nissan-leafs-need-costly-battery-replacements-soon

Could the lithium-ion battery pack in the Nissan Leaf grow old before you’re ready to dispose of your electric vehicle? Reports suggest high-mileage Leafs can experience a noticeable drop in battery capacity in the first year or so of ownership, which means decreased driving range — and owner satisfaction — in following years.

A new Leaf gets about 85 miles per charge (often marketed as “up to 100 miles”). Some Leaf owners in hot-weather climates who drove a lot, nearly 20,000 miles a year, have found they’re only getting 60 or so miles per charge 12-14 months into the cars’ lives. Nissan says the “glide path” for a normal Leaf battery is a decline to 70%-80% capacity after five years and about 70% after 10 years, so these batteries may be getting old before their time. The issue is not minor: the manufacturing cost of the Leaf’s battery is around $15,000, so replacing it costs nearly half as much as the car itself.

For Nissan and its reputation, the stakes are also high if it can’t satisfy the vocal owners who’ve gone online to tell their side of the story.
Nissan Leaf charging
As the Nissan Leaf came to market in late 2010, Nissan projected a range of about 100 miles for the cars and their 24kWh lithium-ion battery packs. Nissan later revised the figure to around 85 miles and cited an absolute worst-case to best-case range of 50 to 130 miles. The EPA set it at 73 miles. In a recent test by Phoenix-area owners of a group of year-old Leafs, some got as little as 60 miles on a full battery charge. The owners have been worried because, 12 to 14 months into their cars’ lives, they’re seeing one or two segments of the 12-segment charge indicators fail to light up, even after a full charge.
The Leaf manual says the first missing bar represents a 15% falloff in capacity; additional bars represent half that. These are not normal cars, though, Nissan notes: the units have an average of more than 19,000 miles on them, versus an expected norm of 12,500 miles per year, and they’re in a warm region that had an unseasonably hot summer. Still, Nissan said this week that it will launch an investigation to determine what’s up.
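Taking the article's reading of the manual at face value (first missing bar: 15%; each additional bar: half that, or 7.5%), a quick Python sketch maps missing segments to remaining capacity. The rule is the article's paraphrase, not an official Nissan specification.

```python
def leaf_capacity_pct(bars_missing):
    """Remaining capacity implied by missing charge-indicator segments,
    per the article's reading of the manual: first missing bar = 15%
    falloff, each additional bar = 7.5% (half that). Illustrative only."""
    if bars_missing <= 0:
        return 100.0
    return 100.0 - 15.0 - 7.5 * (bars_missing - 1)

for bars in range(4):
    print(bars, leaf_capacity_pct(bars))  # 0:100, 1:85.0, 2:77.5, 3:70.0
```

By that rule, two missing bars put a pack at roughly 77.5%, already brushing the bottom of the five-year glide path.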

The battery life “glide path,” as Nissan puts it, is for the big Li-ion pack to retain 70%-80% of its charging and storage capabilities after five years of life and about 70% after 10 years. The Phoenix owners believe the downward slope is a lot steeper, sooner. In an owner range test this month of a half-dozen Leafs in Phoenix, some got as little as 60 miles of range on fully charged batteries, according to InsideEVs.com. The test was run with air conditioning off (good for the battery, if not the occupants), but much of the test route was at highway speed (less good for range, because there’s little chance for regeneration).

Sunday, September 23, 2012

Researchers create single-atom silicon-based quantum computer

Taken from: http://www.extremetech.com/extreme/136614-researchers-create-single-atom-silicon-based-quantum-computer


A team of Australian engineers is claiming it has made the first working quantum bit (qubit) fashioned out of a single phosphorus atom, embedded on a conventional silicon chip.
This breakthrough stems from work dating all the way back to 1998, when Bruce Kane — then a University of New South Wales (UNSW) professor — published a research paper on the possibility of using phosphorus atoms, suspended in ultra-pure silicon, as qubits. For 14 years, UNSW has been working on the approach — and today, it has finally turned theory into practice.
To create this quantum computer chip, the Australian engineers created a silicon transistor so small that “electrons have to travel along it one after the other.” A single phosphorus atom is then implanted into the silicon substrate, right next to the transistor. The transistor only allows electricity to flow through it if one electron from the phosphorus atom jumps to an “island” in the middle of the transistor. This is the key point: by controlling the phosphorus atom’s electrons, the engineers can control the flow of electricity across the transistor.
At this point, I would strongly recommend that you watch this excellent video that walks you through UNSW’s landmark discovery — but if you can’t watch it, just carry on reading.


To control the phosphorus atom’s electrons, you must change their spin, which in this case is done with a small burst of microwave radiation. In essence, when the phosphorus atom is in its ground state, the transistor is off — a value of 0. When a small burst of radiation is applied, the electrons change orientation, one of them pops into the transistor, and it turns on — a value of 1. For more on electron spin and how it might impact computing, read our spintronics and straintronics explainer.
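As a toy illustration of that spin control (our own sketch, not UNSW's actual control scheme), a resonant microwave burst acts like a rotation of the qubit state: a full "pi pulse" flips 0 to 1, while a half pulse leaves the electron in a superposition of both.

```python
import numpy as np

# Toy model of spin control (our illustration, not UNSW's scheme):
# the qubit starts in the ground state |0>, and a resonant microwave
# pulse rotates it about the X axis of the Bloch sphere.
ket0 = np.array([1, 0], dtype=complex)

def microwave_pulse(state, theta):
    """Rabi rotation by angle theta; theta = pi swaps |0> and |1>."""
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    return rx @ state

print(np.abs(microwave_pulse(ket0, np.pi)) ** 2)      # [0. 1.]: now |1>, "on"
print(np.abs(microwave_pulse(ket0, np.pi / 2)) ** 2)  # [0.5 0.5]: superposition
```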
An Australian chip lab, making a quantum device
Now, we’ve written about quantum computers before — the University of Southern California has created a quantum computer inside a diamond, for example — but the key breakthrough here is that UNSW’s quantum transistor has been fashioned using conventional silicon processes. Rather than blazing its own trail, UNSW is effectively riding on the back of 60 years and trillions of dollars of silicon-based electronics R&D, which makes this a much more exciting prospect than usual. It is now quite reasonable to believe that there will be readily available, commercial quantum computers in the next few years.

Saturday, September 22, 2012

First air-to-ground quantum network created, transmits quantum crypto keys

Taken from: http://www.extremetech.com/extreme/136312-first-air-to-ground-quantum-network-created-transmits-quantum-crypto-keys


A team of quantum engineers in Germany has created the first air-to-ground quantum network, between a base station and an airplane flying 20 kilometers (12.4 miles) away. This is all that is needed for governments to create quantum-secured battlefield or surveillance networks — and it is a very tantalizing step towards a global quantum communications network.

The researchers, led by Sebastian Nauerth of the Ludwig Maximilian University, performed the experiment at an airport near Munich using a specially equipped plane. The airplane is outfitted with a photon source (a laser), and a system that can alter the spin (polarization) of the photons very exactly to encode data using the BB84 protocol. BB84, created way back in 1984 by Charles Bennett and Gilles Brassard, was the first protocol devised for quantum key distribution, for the purpose of quantum cryptography. In essence, BB84 encodes digital bits as polarized photons (i.e. qubits).
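To make the protocol concrete, here is a minimal Python sketch of BB84's sifting step (our own illustration; the flight experiment's actual implementation is far more involved):

```python
import secrets

def bb84_sift(n=20):
    """Alice encodes random bits in random bases; Bob measures in his
    own random bases; only the positions where the bases happen to
    match (about half) are kept as shared key material."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = +, 1 = x
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]

    key = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            key.append(bit)  # matching basis: Bob reads the bit faithfully
        # mismatched basis: Bob's result is random, so the bit is
        # discarded once bases are compared over a public channel
    return key

print(bb84_sift())  # roughly n/2 bits survive sifting
```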


Once the plane is aloft, the base station (a telescope) tracks the plane using a motorized mirror, which is quite difficult as the plane is moving at 300 km/h (about 186 mph) some 20 kilometers away. The telescope picks up the transmitted photons, bounces them through a few more mirrors (the green path in the image below), and then uses a very sensitive photodetector to read out the qubits.

All told, the plane and base station were able to maintain a stable link for 10 minutes, transmitting 145 qubits per second with a quantum bit error rate (QBER) of 4.8%. This might seem like a trickle of data, but it’s more than enough to securely transmit an encryption key that can then be used to encrypt normal data sent over standard networks. Key exchange has always been one of cryptography’s biggest weaknesses — but quantum key exchange is intrinsically secure, as observing the qubits during transmission corrupts the data (and alerts the receiver that someone is listening in).
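As a rough check on those figures, the textbook asymptotic estimate says a BB84 link can distill about 1 - 2*h2(QBER) secret bits per sifted bit, where h2 is the binary entropy function. Applying that to the reported numbers (our own calculation; the experiment's real post-processing will differ):

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

qber, raw_rate, seconds = 0.048, 145, 600
sifted_rate = raw_rate * 0.5                  # ~half survives basis sifting
secret_fraction = max(0.0, 1 - 2 * h2(qber))  # asymptotic BB84 estimate
print(round(sifted_rate * secret_fraction * seconds))  # ~19,000 secret bits
# Plenty for dozens of 256-bit symmetric keys from a 10-minute pass.
```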

Interestingly, the experiment was performed just after sunset, to minimize the interference of sunlight. While the research paper [PDF], presented at the QCrypt convention last week, doesn’t explicitly mention if this technique would work during daylight hours, it would probably just be a matter of using a more powerful laser.

Sunday, September 16, 2012

Intel predicts ubiquitous, almost-zero-energy computing by 2020

Taken from: http://www.extremetech.com/computing/136043-intel-predicts-ubiquitous-almost-zero-energy-computing-by-2020

Intel often uses the Intel Developer Forum (IDF) as a platform to discuss its long-term vision for computing as well as more practical business initiatives. This year, the company discussed the shrinking energy cost of computation, as well as a point at which it believes the energy required for “meaningful compute” will approach zero, making computing ubiquitous by the year 2020. The company didn’t precisely define “meaningful compute,” but I think in this case we can assign a solid working definition. Adding two integers together is computing, but it isn’t particularly meaningful. Accurately measuring geospatial location via GPS, making a phone call, or playing a game is meaningful.

The idea that we could push the energy cost of computing down to nearly immeasurable levels is exciting. It’s the type of innovation that’s needed to drive products like Google Glass or VR headsets like the Oculus Rift. Unfortunately, Intel’s slide neatly sidesteps the greatest problems facing such innovations — the cost of computing already accounts for less than half the total energy expenditure of a smartphone or other handheld device. Some of the recent trends in smartphones, like the push for high-quality Retina displays and LTE connectivity, have significantly increased device power consumption. Smaller CPUs and more power-efficient components have been offset by higher storage capacities and additional RAM.



Can Intel build small compute engines with a near-zero cost of calculation by 2020? Maybe it can. But the real question is whether Intel, or other manufacturers, can manufacture the touch screens, displays, radios, speakers, cameras, and audio processors that would go into such devices to drive the ubiquitous computing revolution. Lithium-air batteries may eventually be capable of replacing today’s lithium-ion designs, but commercial Li-air is thought to be at least 10 years away.

This doesn’t mean technology won’t advance, but it suggests a more deliberate, incremental pace as opposed to an upcoming revolution. Smartphones of 2018-2020 may be superior to top-end devices of the present day in much the same way that modern computers are more powerful than desktops from the 2006 era. Modern rigs have significant advantages — but 2006 hardware is still quite serviceable in a variety of environments. The early years of the smartphone revolution were marked by enormous leaps forward from year to year, but we may already be reaching the end of that quick advance phase.

Saturday, September 15, 2012

Air travel in 2050: Autonomous planes flying in geese-like formations

Taken from: http://www.extremetech.com/extreme/135774-air-travel-in-2050-autonomous-planes-flying-in-geese-like-formations


One day in the not-so-distant future, semi-autonomous airplanes will fly through the skies like flocks of geese, reducing drag and saving fuel. This concept is part of a broader plan by airplane maker Airbus to modernize the industry over the next three decades.

“Our engineers are continuously encouraged to think widely and come up with ‘disruptive’ ideas which will assist our industry in meeting the 2050 targets we have signed up to,” Airbus’ engineering chief Charles Champion says. “These and the other tough environmental targets will only be met by a combination of investment in smarter aircraft design and optimising the environment in which the aircraft operates.”

While planes presently fly with about 1,000 feet (300m) of vertical separation between them, computer-controlled planes could fly much closer. The planes’ flight control systems could also ingest weather and atmospheric conditions and adjust to constantly fly the best available route, while at the same time maintaining a safe distance from other nearby planes.

The pilot would still be in control of the aircraft, with the enhanced computer system acting as a backup that smooths out natural pilot error. Flying in such close formation obviously requires precision — far more than even the most experienced pilot can provide — but accidents and issues do happen, so Airbus sees it as important for the pilot to retain control. The goal here is efficiency, and the company’s research shows that’s exactly what such a system provides.

On average, thanks to reduced air resistance — just like a flock of geese flying in a V formation — flights will be about 13 minutes shorter, and about 9 million tons of aircraft fuel will be saved each year. Billions of dollars a year would be saved in fuel costs, which hopefully would make air travel cheaper too. There’s also an environmental benefit here: 28 million fewer tons of CO2 would be put into our atmosphere each year.
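The fuel and CO2 figures are consistent with each other: burning a kilogram of jet fuel produces roughly 3.16kg of CO2, a standard emissions factor (this check is our own arithmetic, not Airbus' published math).

```python
# Sanity check (our arithmetic): burning 1kg of jet fuel produces
# roughly 3.16kg of CO2, a standard emissions factor.
fuel_saved_tons = 9_000_000
co2_tons = fuel_saved_tons * 3.16
print(f"{co2_tons / 1e6:.1f} million tons of CO2")  # ~28.4 million tons
```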

In addition to computer-controlled flying, Airbus proposes a superfast ground vehicle to launch aircraft into the sky, and free-glide approaches into airports. Both would require much less runway, which means airports could be constructed in highly populated areas where land comes at a premium. Another time- and fuel-saving measure is optimizing the taxiing of planes around the airport, using robotic taxi vehicles powered by renewable energy.

Let there be color! Enchroma creates special sunglasses to help the color blind

Taken from: http://www.extremetech.com/extreme/135953-let-there-be-color-enchroma-designs-special-sunglasses-to-help-the-color-blind



A company called Enchroma has developed a pair of sunglasses intended to help the color blind. Before we dive into the technology involved, let’s clear something up: people who are color blind aren’t actually blind in any way, nor do they see the world in black and white. Color blindness is a deficiency in the way you see color, with red-green color deficiency being the most common form.

The way Enchroma approached the problem is by applying a special optical coating to its lenses, which allows them to filter light reaching the eye. Traditional sunglasses reduce the transmission of light across the entire spectrum, whereas Enchroma’s eye gear selectively filters light in order to enhance the color effect. In this way, Enchroma considers them to be “smart sunglasses.”

There’s a lot of math and science involved in what Enchroma is doing. The company’s patent-pending coatings are designed using a proprietary computer-based optimization method derived from a mathematical model of the human visual system.


Retinal cones perceive color in light and transmit that data to the optic nerve. The most common type of color blindness is deuteranomaly, in which the green-sensitive cones (M-cones) have decreased sensitivity. Protanomaly is a milder color vision defect that affects the red-sensitive receptors (L-cones).

For individuals with deuteranomaly, M-cone sensitivity is shifted towards longer wavelengths, whereas for individuals with protanomaly, L-cone sensitivity shifts towards shorter wavelengths. Enchroma’s specially designed filters selectively block the wavelengths of light that wreak havoc on a person’s ability to see certain colors.
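Here is a toy numerical sketch of the filtering idea in Python. All the numbers are illustrative guesses (Enchroma's actual filter curves are proprietary): with Gaussian stand-ins for the cone sensitivities, a notch placed where the shifted M-cone and the L-cone overlap most widens the gap between "red" and "green" responses.

```python
import numpy as np

wl = np.arange(400.0, 701.0)  # wavelength grid in nm

def gauss(mu, sig):
    """Crude Gaussian stand-in for a spectral curve."""
    return np.exp(-((wl - mu) / sig) ** 2)

# Cone sensitivities: in deuteranomaly the M-cone is shifted toward
# the L-cone, so the two respond almost identically. Peaks and widths
# here are illustrative guesses, not measured human data.
L_cone, M_cone = gauss(560, 40), gauss(545, 40)

# Hypothetical notch filter centred on the overlap region; the centre,
# width, and depth are invented for illustration only.
notch = 1 - 0.9 * gauss(552, 12)

def red_green_signal(spectrum, filt):
    """Normalized L-minus-M opponent signal: the cue the red-green
    channel of human vision is built on."""
    l = (L_cone * spectrum * filt).sum()
    m = (M_cone * spectrum * filt).sum()
    return (l - m) / (l + m)

green_obj, red_obj = gauss(540, 30), gauss(620, 30)  # broadband stimuli
for name, filt in [("plain", np.ones_like(wl)), ("notched", notch)]:
    gap = red_green_signal(red_obj, filt) - red_green_signal(green_obj, filt)
    print(name, round(gap, 3))  # the notch widens the red-green gap
```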

Enchroma’s first products will be standard Cx Series lenses, available with or without a prescription, which are intended to enhance color for individuals with normal color vision. Enchroma will also offer two other pairs – Cx-D for individuals with deuteranomaly and Cx-PT for those with protanomaly.


Sunday, September 9, 2012

FBI launches $1 billion nationwide facial recognition system

Taken from: http://www.extremetech.com/extreme/135665-fbi-launches-1-billion-nationwide-facial-recognition-system


The US Federal Bureau of Investigation has begun rolling out its new $1 billion biometric Next Generation Identification (NGI) system. In essence, NGI is a nationwide database of mugshots, iris scans, DNA records, voice samples, and other biometrics that will help the FBI identify and catch criminals — but it is how this biometric data is captured, through a nationwide network of cameras and photo databases, that is raising the eyebrows of privacy advocates.


Until now, the FBI has relied on IAFIS, a national fingerprint database that has long been due an overhaul. Over the last few months, the FBI has been pilot testing a facial recognition system — and soon, detectives will also be able to search the system for other biometrics, such as DNA records and iris scans. In theory, this should result in much faster positive identifications of criminals and fewer unsolved cases.



According to New Scientist, facial recognition systems have reached the point where they can match a single face from a pool of 1.6 million mugshots/passport photos with 92% accuracy. In the case of automated, biometric border controls where your face and corresponding mugshot are well lit, the accuracy approaches 100%. Likewise, where DNA or iris records exist, it’s a very expedient way of accurately identifying suspects.

So far, so good — catching criminals faster and making fewer false arrests must be a good thing, right? Well, yes, but there are some important caveats to bear in mind. For a start, the pilot study has only used mugshots and driver’s license photos of known criminals — but the FBI hasn’t guaranteed that this will always be the case. There may come a time when the NGI is filled with as many photos as possible, from as many sources as possible, of as many people as possible — criminal or otherwise. This might be as overt as parsing CCTV footage and collating every single face into a database; or maybe you’re just unlucky and your face ends up in the system because you’re in the background of a photo featuring a known criminal.

Big Data in Your Blood

Taken from: http://bits.blogs.nytimes.com/2012/09/07/big-data-in-your-blood/


Very soon, we will see inside ourselves like never before, with wearable, even internal, sensors that monitor even our most intimate biological processes. It is likely to happen even before we figure out the etiquette and laws around sharing this knowledge. Already, products like the Nike+ FuelBand and the Fitbit wireless monitor track our daily activity, taking note of our steps and calories burned. The idea is to help meet an exercise regimen, and perhaps lose some weight. The real-world results are uneven. For sure, though, people are building up big individual databases about themselves over increasingly long periods of time. So are the companies that sell these products, which store that data.


That is barely the start. Later this year, a Boston-based company called MC10 will offer the first of several “stretchable electronics” products that can be put on things like shirts and shoes, worn as temporary tattoos, or installed in the body. These will be capable of measuring not just heart rate, the company says, but brain activity, body temperature, and hydration levels. Another company, called Proteus, will begin a pilot program in Britain for a “Digital Health Feedback System” that combines wearable technologies with microchips the size of a grain of sand that ride a pill right through you. Powered by your stomach fluids, the chip emits a signal that is picked up by an external sensor, capturing vital data. Another firm, Sano Intelligence, is looking at microneedle sensors on skin patches as a way of deriving continuous information about the bloodstream.



There are also movements to use this data in entirely new ways, for patient-generated medical research. Linda Avey, who co-founded the personal genetics company 23andMe, is now working on a start-up called Curious, which should be live by the middle of next year. Her idea is to get people with difficult-to-pin-down conditions like chronic fatigue, lupus, or fibromyalgia to share information about themselves. This could include the biological data from devices, but also things like how well they slept, what they ate, and when they experienced pain. Collectively, this could lead to evidence about how behavior and biology conjure these states.


Friday, September 7, 2012

Amazon Updates Its Kindle Line of E-Readers

Taken from: http://www.nytimes.com/2012/09/07/technology/amazon-updates-its-kindle-line-of-e-readers.html?ref=technology


Amazon announced a barrage of new tablets and e-readers on Thursday that will make its rivalry with Apple’s iPad a little more serious. Amazon updated its line of Kindle e-readers, including the Kindle Fire HD, a tablet computer that comes in two sizes, one of which is nearly as large as the iPad while undercutting its price by $200. The company also announced the Kindle Paperwhite, a new version of the black-and-white Kindle. It is thinner and turns pages 15 percent faster than its predecessor. It also has a new high-contrast screen that Amazon says will be easier to read, especially in the dark, because it is lit from the bottom.

Jeff Bezos, Amazon’s chief, with the two new Kindle Fire HDs.
 

The Kindle Fire HD challenges the iPad on several fronts. First, the larger version of the device has an 8.9-inch display, compared with the iPad’s 9.7 inches. Second, the new Amazon device has a front-facing camera that works with the built-in Skype video conferencing software, competing directly with the front-facing camera on the iPad and Apple’s FaceTime video conferencing features. Furthermore, like the iPad, the new Kindle Fire offers 16 gigabytes of storage. And last but not least, the lower price gives the Kindle line an advantage: the larger version of the Kindle Fire HD costs $300, while the baseline iPad costs $500. (Apple sells an older model for $400.) Amazon is also offering a $500 version of the Kindle Fire HD with cellular data connectivity, which is cheaper than Apple’s least expensive iPad with cellular connectivity, at $630.

  The larger version of the Kindle Fire HD has an 8.9-inch screen.
 

Amazon’s new devices also have some distinctive features. One innovation is the X-Ray feature for movies played on the Kindle Fire HD: viewers can click on an actor in a movie and find out more about him or her through the IMDb.com movie and TV database, which Amazon owns.

Amazon and Apple run their businesses quite differently. Amazon, an online retailer, still makes its money selling content; Apple profits on its devices. Amazon’s services are the core of its devices, and the devices enhance Amazon’s services. If this innovation succeeds, Amazon stands to gain market share.

Can Apple always stay in the lead? I think it depends. It is not really Apple that leads the world; it is innovation and good ideas that lead the world. Whoever is more innovative will go further.

Sunday, September 2, 2012

IFA: Move over 3D, it’s time for 4K UHDTV

Taken from: http://www.extremetech.com/electronics/135327-ifa-move-over-3d-its-time-for-4k-uhdtv



After five years of trying to convince us that 3D TVs are the future, it seems TV makers are finally ready to move on — to 4K UHDTV. At the IFA consumer electronics show in Berlin, Sony, Toshiba, and LG are all showing off 84-inch 4K (3840×2160) TVs. These aren’t just vaporware, either: LG’s TV is on sale now in Korea (and arrives later this month in the US), Sony’s is due later this year, and Toshiba’s will follow in the new year.

LG

LG actually debuted its 4K TV back at CES in January, but it’s back at IFA with a launch date (September), a price ($22,000), a model number (84LM9600), and this time the company is actually letting people play with the set. Beyond its size and resolution, there’s plenty of connectivity down the side (HDMI and USB ports up the wazoo), passive 3D (and 2D-to-3D conversion), built-in WiFi, and a slew of other top-end features.
In general, consumers and reporters at IFA all seem to say the same thing about LG’s 84-inch TV: It only really comes into its own when you get really close — close enough that all you can see is the TV (about five feet). Remember, despite having 3840×2160 (8.2 million pixels) — four times the resolution of 1920×1080 — the pixel density is still very low (54 PPI, vs. the 200-300 PPI found on modern mobile displays). An 84-inch 4K TV only has a slightly higher pixel density than a 50-inch 1080p TV (44 PPI).
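The pixel-density arithmetic is easy to check; straight diagonal math (our own calculation) gives about 52 PPI for the 84-inch sets, within rounding distance of the figure above.

```python
from math import hypot

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal pixel count over diagonal inches."""
    return hypot(width_px, height_px) / diagonal_in

print(round(ppi(3840, 2160, 84)))  # ~52 PPI: the 84-inch 4K sets
print(round(ppi(1920, 1080, 50)))  # ~44 PPI: a 50-inch 1080p TV
print(round(ppi(1136, 640, 4)))    # ~326 PPI: a contemporary phone screen
```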
Curiously, a few people are reporting that the TV seems to have very poor horizontal viewing angles (and the LG site doesn’t even list the viewing angles, which is usually a bad sign).


Sony

Sony, never one to be outdone on features, has decided that its 84-inch 4K UHDTV will debut with a built-in 10-speaker 50-watt sound system, built-in WiFi, and Sony’s Entertainment Network, which provides access to Netflix, Pandora, YouTube, Skype, and other web services. The whole thing weighs a mind-blowing 176 pounds (80 kilos).
Like LG, Sony’s XBR-84X900 (Sony sure loves its memorable model numbers) supports passive 3D at 4K resolutions, and for PlayStation 3 owners there’s SimulView, which allows two gamers to play a game at 1080p without split screen (using polarized glasses).
There’s no word on pricing — but it’ll probably be at least $25,000 when it launches “some time this year.”



Toshiba

The Toshiba 84-inch 4K TV, with an iPhone next to it for scale (Credit: The Verge)

We don’t know much about Toshiba’s 84-inch 4K display, other than the fact that it’s coming some time in 2013. Judging by the photos, Toshiba’s unit is sleeker than Sony’s TV, but not quite as svelte as LG’s. There aren’t any built-in speakers — but really, if you’re going to spend $20k on a TV, does Sony really think that you won’t also have a proper cinema-grade surround sound setup?

Our best bet is to assume that the 84-inch model has the same features as Toshiba’s smaller, already-launched 55-inch 4K TV. The 55ZL2 supports glasses-free 3D through lenticular lenses, which redirect 3D imagery to different locations (i.e. different seats on the sofa). The 55ZL2 also has the ability to play video from online sources, but most reviews suggest that Toshiba’s offering pales in comparison to Sony’s, or indeed to a $99 media streamer.

Perhaps most worryingly, the 55ZL2 only accepts 4K video input through Toshiba’s proprietary “digital serial port” — and the only device that outputs to a digital serial port is Toshiba’s own professional, very expensive media servers. Hopefully the 84-inch model will accept 4K over HDMI, like the Sony and LG UHDTVs.


Finally, a friendly reminder: While a 4K monitor or TV sounds like a good idea, bear in mind that there’s almost zero 4K content on the market — and short of spending a thousand bucks on a monstrous video card setup, nothing that will even come close to rendering a game at 3840×2160. There isn’t a 4K Blu-ray standard, and 4K broadcast TV transmission is still very much in its infancy.

Saturday, September 1, 2012

CSI-style super-resolution image enlargement? Yeeaaaah!

Taken from: http://www.extremetech.com/extreme/132950-csi-style-super-resolution-image-enlargment-yeeaaaah

Believe it or not, there is actually a grain of truth to the software employed by geeky technicians in TV shows and movies that can seemingly reconstruct a high-resolution crime scene from a woefully pixelated source image or video. There really is a way, with software, of increasing the image quality of enlarged images.

The technique is called super-resolution, and there are two basic approaches. The first approach takes a bunch of similar images of the same object, and then uses an algorithm to create a single image with the best/sharpest bits from each. The second approach is slightly more magical. In any given image, the same pattern of pixels usually appears multiple times — tiles on a floor, bricks on a wall, wrinkles on a face, spots on a butterfly. In each case, though, because we live in a 3D world, these patterns are slightly different sizes, and each pattern has a slightly different subpixel shift. If you group together enough of these pixel patterns, and take the best subpixels from each, you can work out how that pattern actually looks in reality.
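Here is a minimal Python sketch of the first, multi-image approach. It assumes the subpixel shifts are already known; estimating them is most of the hard work in a real system.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Minimal multi-frame super-resolution: drop each low-res sample
    onto a finer grid according to its known subpixel shift, then
    average wherever samples overlap. frames: list of HxW arrays;
    shifts: per-frame (dy, dx) offsets in low-res pixel units,
    e.g. shift_and_add([f0, f1, f2, f3],
                       [(0, 0), (0, .5), (.5, 0), (.5, .5)])."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-res cell for each low-res pixel centre.
        ys = np.clip(np.round((np.arange(h) + dy) * scale), 0, h * scale - 1).astype(int)
        xs = np.clip(np.round((np.arange(w) + dx) * scale), 0, w * scale - 1).astype(int)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1      # cells no frame landed on stay zero;
    return acc / cnt       # a real pipeline would interpolate them
```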

In short, it’s possible to take a blurry or low-resolution image, and gain image quality by enlarging it with super-resolution techniques. As you can see above, and in the examples below, super-resolution can produce some startling results.







These images, which were created using a mix of both super-resolution approaches, come from a Weizmann Institute of Science research paper titled “Super Resolution From a Single Image.” Rather than layering together multiple low-resolution images, the Weizmann technique basically involves turning a single image into lots of tiny images (say, 5×5 pixels each), and then comparing each of these blocks to see if there are any matches. If any matches are made, they can then be combined to create a sharper version. The process isn’t perfect and can create artifacts (check the last line of the eye chart), but in almost every case it can tease a little more detail out of an enlarged image.

The two main uses of super-resolution are obvious — commercial enlargement of images, and crime fightin’ — but a third option, compression, might prove to be an even better use. For example, you can use JPEG compression to turn a 100KB image into a 20KB image without much loss of detail. But imagine if you applied compression and reduced the image’s dimensions, and then used super-resolution to display the image. We could be talking about a very efficient way of reducing our smartphone traffic bills, or bridging the gap between normal and Retina displays.
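As a toy version of that idea (our own illustration, using the Pillow imaging library; plain bicubic enlargement stands in for the super-resolution step):

```python
from io import BytesIO
from PIL import Image  # pip install pillow

def shrink_store_enlarge(path, factor=2, quality=75):
    """Store a half-size JPEG, then enlarge it again for display.
    Bicubic enlargement stands in for a real super-resolution step."""
    img = Image.open(path).convert("RGB")
    small = img.resize((img.width // factor, img.height // factor),
                       Image.LANCZOS)
    buf = BytesIO()
    small.save(buf, "JPEG", quality=quality)
    print(f"stored payload: {buf.tell()} bytes")
    return small.resize(img.size, Image.BICUBIC)
```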

The only real problem with super-resolution is that it’s computationally expensive. In the Weizmann Institute research paper, there isn’t a single mention of just how long it takes to create each super-resolution image, which suggests that the algorithm is very slow. Some research groups have reported that real-time super-resolution is possible with GPU acceleration, though. It’s also worth pointing out that super-resolution isn’t always the best solution for enlargement: in the case of line art or old-school computer game emulation, a vectorizing algorithm might be a better choice.


Samsung unveils the Galaxy Camera: Does Android belong in your point-and-shoot?

Taken from: http://www.extremetech.com/electronics/135231-samsung-galaxy-camera-does-android-belong-in-your-point-and-shoot

Today Samsung joined Nikon in announcing an Android-powered camera. The Samsung Galaxy Camera weighs 305g, features a 16-megapixel CMOS sensor, a 21x super-zoom lens, a quad-core 1.4GHz SoC (probably Exynos 4), and 8GB of internal storage, and runs Android 4.1 Jelly Bean. This compares with the Nikon S800c, which also has a 16MP CMOS sensor, along with a 7x zoom f/2 lens, and runs Android 2.3 Gingerbread. Since neither unit has shipped, we don’t know anything yet about how good they are as cameras, but we do know that the companies are trying to regain some of the ground they’ve lost to smartphones by integrating sharing right into their cameras.

Samsung promotes the camera’s Smart Pro presets as a quick way to capture the “perfect photo.” The Galaxy Camera’s display is an eye-opening 4.8-inch HD Super Clear LCD screen — much larger than the 3.5-inch screen on Nikon’s model — and Samsung has augmented its S Voice application with camera control commands such as “Zoom in” and “Shoot.”

Cloud-friendly camera

Unlike Nikon’s S800c, which is limited to WiFi connectivity, the Galaxy Camera is available with either 3G or 4G along with WiFi, although details weren’t available on how data plans would work. Samsung has announced an Auto Cloud Backup feature, which can save photos through its AllShare service as they are taken.




Clearly both of these cameras are designed to play catch-up with smartphones in the race to be the photo-sharing device of choice. Photos have always been taken to be shared, except now that usually happens through an instant upload to services like Facebook, Pinterest, or Instagram, not by laboriously printing out snapshots and showing them around. This has meant an irreversible trend towards the smartphone as the primary camera for most new photographers, and many experienced ones.

Is this the beginning of the super camera?

For photographers, there are a couple of critical questions about these new models. First is whether these cameras will have enough additional functionality to justify the added cost and weight when most people already have a serviceable camera in their phone. Second, and more importantly, there is still a big question mark hanging over Nikon and Samsung’s long-term intentions for Android. If Android cameras are just standard point-and-shoots with a smartphone OS bolted on for sharing, that’ll be a wasted opportunity. It would have been easier to create a camera that instantly tethered to a smartphone instead, and let the phone do all the work. There is an exciting possibility, if Nikon and Samsung do this correctly, to really unleash the power of Android to enable new photographic solutions.