Taken from: http://allthingsd.com/20121019/google-wallet-exec-no-surprise-digital-payments-are-slow-going/?refcat=mobile
It took 50 years for the credit card to become the dominant means of payment,
so it shouldn’t be surprising that mobile payments haven’t immediately taken
off.
Everyone is expecting change to happen in weeks or months, but it will take
time, says Osama Bedier, Google’s VP of Wallet and payments. “We will have
mobile payments,” Bedier said, speaking at the Global Mobile Internet Conference
in San Jose on Friday.
There’s room for more than one player, Bedier said, but each has to solve an
issue. “There’s a lot of ideas and not a lot of problems being solved,” Bedier
said. “Credit cards already work pretty well if all you have to do is
payments.”
Ultimately, the former PayPal exec said that mobile payments have to save
time, save money, or both. Technology can do that, he added. On the
technology front, Bedier said he remains a believer that near field
communication technology (NFC) will be ubiquitous on both phones and payment
terminals within five years. NFC will also find its way into many other places
in the logistics chain. “NFC chips will replace bar codes,” Bedier said.
But many believe NFC will take a year or more to take off, especially now
that Apple declined to embed the chips into its latest release, the iPhone 5.
Other payment companies, like PayPal, have
decided to find other avenues to enabling digital payments without it, and
companies like Starbucks are relying on something as simple as a barcode.
Bedier said at least half of all transactions will be mobile within five
years, but he offered few details about how much volume Google Wallet is doing
today.
“The numbers are compelling,” he said, without revealing any of those compelling
numbers.
A rival executive from Scvngr, which runs a payments service called LevelUp,
recently tweeted somewhat hyperbolically that Google Wallet has five users.
“We have a lot more than five users,” Bedier said, though he wouldn’t say how
many customers it has. He did say that the company doubled its transaction
volume in the first few weeks after transitioning its Wallet transactions to
the cloud back in August. “We’re seeing that trajectory continuing,” he said.
Still, the company faces some obvious adoption hurdles, because today Google
Wallet is only available on NFC-capable Android phones through one U.S.
carrier: Sprint.
The other three carriers — AT&T, Verizon Wireless and T-Mobile — are
backing ISIS, which is launching next week. Bedier acknowledged the lack of
support from carriers for Google Wallet. “We haven’t yet seen eye to eye on a
mobile wallet solution,” he said. “So far, they have said they want to do their
own thing and we respect that.”
Saturday, October 20, 2012
The Pirate Bay moves to the cloud to evade the police
Taken from: http://www.extremetech.com/computing/138037-the-pirate-bay-moves-to-the-cloud-to-evade-the-police
In a tenacious move that should finally make The Pirate Bay “raid proof,” the world’s largest torrent site has moved… to the cloud!
First The Pirate Bay got rid of its trackers. Then it stopped hosting torrent files. And now, in the words of TPB’s Winston Brahma, “we’ve gotten rid of the servers.” He nebulously continues: “Slowly and steadily we are getting rid of our earthly form and ascending into the next stage, the cloud.”
In short, The Pirate Bay website (the search engine that you use to look for torrent magnet links) is no longer hosted on hardware owned and managed by the TPB admins. Instead, the website is now hosted by multiple cloud companies in two separate countries. There are still two important pieces of TPB-owned hardware, however: a load balancer, which encrypts requests before passing them along to the cloud instances so that the cloud providers themselves don’t know the identity of users accessing The Pirate Bay, and a transit router.
In theory, if TPB was raided by the police today, all they would find is a transit router. If they followed the data trail they would come across the load balancer, which in this case is a diskless server “with all the configuration stored in RAM.” (The theory here is that, if the police turn the server off, they will lose the IP addresses of the cloud instances where the TPB website is actually stored).
If the police finally work their way back to the cloud hosts, all they will find are encrypted virtual machine images (and hopefully no logs). There are backups of these disk images, of course, which can be deployed to other cloud hosts with relative ease. Perhaps more importantly, if these servers lose contact with the load balancer for more than eight hours, they automatically shut down, eliminating any chance of the police accessing their data.
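That eight-hour cutoff is essentially a dead-man’s switch. As a rough illustration of the idea (not a description of TPB’s actual setup), a watchdog running on one of the cloud instances might look something like the minimal Python sketch below; the heartbeat URL, polling interval, and shutdown command are all assumptions.

```python
import subprocess
import time
import urllib.request

# All values below are illustrative assumptions, not TPB's real configuration.
HEARTBEAT_URL = "https://load-balancer.example/heartbeat"  # hypothetical endpoint
CHECK_INTERVAL_S = 10 * 60        # probe every 10 minutes
DEADLINE_S = 8 * 60 * 60          # give up after 8 hours of silence


def load_balancer_reachable() -> bool:
    """Return True if the load balancer answers the heartbeat request."""
    try:
        with urllib.request.urlopen(HEARTBEAT_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


def watchdog() -> None:
    last_contact = time.monotonic()
    while True:
        if load_balancer_reachable():
            last_contact = time.monotonic()
        elif time.monotonic() - last_contact > DEADLINE_S:
            # Eight hours without contact: halt the VM so nothing useful is
            # left running for whoever now controls the load balancer.
            subprocess.run(["shutdown", "-h", "now"], check=False)
            return
        time.sleep(CHECK_INTERVAL_S)


if __name__ == "__main__":
    watchdog()
```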
On a practical level, though, there are still plenty of ways to shut down The Pirate Bay. The load balancer will be hosted with a major ISP — and it’s easy enough for the feds to compel an ISP to pull the plug. The site would then be down until TPB can install another load balancer, which probably isn’t cheap or easy. Moving to the cloud does nothing to prevent DNS blackholing, either — though TPB’s move to embrace IPv6 definitely helps in that regard. Realistically, if we cut through all the hyperbole, the real advantages of moving to the cloud are less downtime and lower costs for the TPB admins.
Personally, I’m glad to see that The Pirate Bay settled on cloud hosting. For a while it was considering a fleet of low-orbit server drones, which was a little bit on the crazy side.
Wednesday, October 17, 2012
Barbican's Rain Room: it's raining, but you won't get wet
Taken from: http://www.barbican.org.uk/news/artformnews/art/visual-art-2012-random-internati
Have you ever been caught in a terrible downpour and wished you could make it stop? Do you want to have the power to control the weather? Now, your dream can come true.
Rain Room, a new exhibition at London's Barbican Centre, marries art, science, and technology to do just that. Designed by the studio Random International, it premiered at The Curve, Barbican on October 4th, 2012. The Rain Room is a 100-square-metre field of falling water.
When you enter the Rain Room, you can hear the sound of water and feel the moisture in the air while discovering thousands of falling drops that respond to your presence and movement. Sensors detect where visitors are standing, and the water halts above their heads. So despite standing in a space filled with falling rain, visitors remain dry.
This works thanks to 3D sensing cameras fixed to the ceiling of the Rain Room: every person who walks into the room is recognized, and as long as you move around slowly, the rain stops overhead. More than just a technical work, Rain Room is about people. Described as a "social experiment" by one of the artists who created it, Rain Room is a terrific way to see different reactions and interactions.
The experience is surprisingly peaceful, and quite different from standing in the rain under an umbrella, since you don't hear the rain battering against the fabric overhead.
Rain Room runs at The Curve until March next year, so if you get the chance, don't miss the opportunity to get close to the rain without getting wet.
Monday, October 15, 2012
MIT creates carbon nanotube pencil, doodles some electronic circuits
Taken from: http://www.extremetech.com/extreme/137555-mit-creates-carbon-nanotube-pencil-doodles-some-electronic-circuits
A team of MIT chemists has created a carbon nanotube “lead” that can be used to draw electronic circuits freehand with a standard mechanical pencil.
In a normal pencil, the lead is usually fashioned out of graphite and a clay binder. Graphite, as you may already know, is a form of carbon that is made up of layer after layer of the wonder material graphene. When you write or draw with a graphite pencil, a mixture of tiny graphene flakes and clay are deposited on the paper, creating a mark. (Incidentally, pencil leads never contained lead; it’s just that when graphite was first used in the 1500s, they thought it was lead ore, and the name stuck).
With MIT’s carbon nanotube pencil, the lead is formed by compressing single-walled carbon nanotubes (SWCNT), until you have a substance that looks and behaves very similarly to graphite. The difference, though, is that drawing with MIT’s pencil actually deposits whole carbon nanotubes on paper — and carbon nanotubes have some rather exciting properties.
In this case, MIT is utilizing the fact that SWCNTs are very electrically conductive — and that this conductivity can be massively altered by the introduction of just a few molecules of another substance, namely ammonia.
In MIT’s test setup, electricity is applied to gold electrodes imprinted on the paper, and the carbon nanotube pencil is used to fill in the gaps between them, effectively acting as a resistor. When ammonia gas is present, the conductivity of the nanotubes decreases, and thus resistance increases — which can be easily measured. The nanotubes are so sensitive that MIT’s hand-drawn sensor can detect concentrations of ammonia as low as 0.5 parts per million (ppm).
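To make the read-out concrete, here is a minimal sketch of the measurement logic: derive the resistance of the drawn SWCNT trace from a known test voltage and the measured current, then flag ammonia when resistance climbs a set fraction above the clean-air baseline. The 5% threshold and the example numbers are purely illustrative assumptions, not values from MIT’s work.

```python
def resistance_ohms(test_voltage_v: float, measured_current_a: float) -> float:
    """Ohm's law: R = V / I for the drawn nanotube trace."""
    return test_voltage_v / measured_current_a


def ammonia_detected(baseline_ohms: float, reading_ohms: float,
                     rise_threshold: float = 0.05) -> bool:
    """Flag ammonia when resistance rises more than rise_threshold
    (5% here, an illustrative value) above the clean-air baseline."""
    return (reading_ohms - baseline_ohms) / baseline_ohms > rise_threshold


# Hypothetical numbers: 1 V test voltage, current falls once ammonia arrives.
baseline = resistance_ohms(1.0, 2.0e-4)     # 5000 ohms in clean air
reading = resistance_ohms(1.0, 1.8e-4)      # ~5556 ohms, conductivity dropped
print(ammonia_detected(baseline, reading))  # True
```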
There are two main takeaways here. The first is that MIT has found a form of carbon nanotubes that is stable, safe, and cheap to produce. Second, carbon nanotubes have been used in sensors before, but usually the process involves dissolving SWCNTs in solvents, which can be dangerous. Here, creating a carbon nanotube sensor is as simple as drawing on a piece of paper — either by a human, or an automated process.
The team will now work on other carbon nanotube leads that can be used to detect other gases, such as ethylene (produced by fruit as it ripens) and sulfur (for detecting natural gas leaks). It’s also worth noting that the research was partly funded by the US Army/MIT Institute for Soldier Nanotechnologies — so it wouldn’t be surprising if military personnel are eventually outfitted with these sensors… or perhaps their very own carbon nanotube pencil, for MacGyver-like sensor fabrication in the field.
Saturday, October 13, 2012
New encryption method avoids hacks by saving your password in multiple locations
Taken from: http://www.extremetech.com/computing/137606-new-encryption-method-avoids-hacks-by-saving-your-password-in-multiple-locations
One of the central problems in modern computer security is the need to protect an ever-increasing amount of user data from an enormous array of potential threats. Researchers at the security firm RSA have proposed a new method of securing passwords against database hacks: breaking them into pieces and storing the pieces in separate locations. From the user’s perspective, nothing would change — you’d visit a website, type your password, and log in normally.
Server-side authentication, however, would be considerably different. Currently, when you transmit a password to a website, the password is typically hashed or otherwise encrypted; the server doesn’t store your password in plaintext, but rather a hashed value derived from it. Cryptographic hashes can, in principle, be attacked by brute force, but a well-designed hash is computationally infeasible to crack in any reasonable amount of time on conventional hardware. The problem is that this approach looks good in theory but is often flawed in practice.
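As a baseline for comparison, a minimal sketch of that conventional single-server approach, using the PBKDF2 key-derivation function from Python’s standard library, might look like this (the iteration count and salt size are typical choices, not values taken from the article):

```python
import hashlib
import hmac
import os


def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Store only (salt, derived_key); the plaintext password is never kept."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key


def verify_password(password: str, salt: bytes, stored_key: bytes,
                    iterations: int = 200_000) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(key, stored_key)  # constant-time comparison


salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```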
What RSA is proposing is a system that would break a password into two halves, and then store each half in different locations — perhaps on a different hard disk in the same data center, or maybe on the other side of the world. The separate versions are then hashed again to create a new string. The two password servers would then compare the new string to determine if the hash values match. If they do, the password is legitimate. If they don’t, the login fails.
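The article doesn’t spell out RSA’s exact protocol, so the following is only a toy sketch of the general idea under stated assumptions: each server stores a keyed hash of one half of the password, each produces a partial comparison value at login, and the two partials are combined and checked together. RSA’s real distributed-credential scheme is more sophisticated than this.

```python
import hashlib
import hmac
import os


class HalfServer:
    """Toy stand-in for one of the two password servers."""

    def __init__(self) -> None:
        self.secret = os.urandom(32)          # per-server secret, never shared
        self.records: dict[str, bytes] = {}

    def _digest(self, half: str) -> bytes:
        return hmac.new(self.secret, half.encode(), hashlib.sha256).digest()

    def enroll(self, user: str, half: str) -> None:
        self.records[user] = self._digest(half)

    def partial(self, user: str, half: str) -> bytes:
        # XOR of stored and freshly computed digests: all zeroes iff they match.
        stored = self.records[user]
        fresh = self._digest(half)
        return bytes(a ^ b for a, b in zip(stored, fresh))


def login(user: str, password: str, s1: HalfServer, s2: HalfServer) -> bool:
    mid = len(password) // 2
    p1 = s1.partial(user, password[:mid])
    p2 = s2.partial(user, password[mid:])
    # Both partial results must be all zeroes for the combined check to pass.
    return hmac.compare_digest(p1 + p2, bytes(len(p1) + len(p2)))


s1, s2 = HalfServer(), HalfServer()
user, pw = "alice", "hunter2hunter2"
s1.enroll(user, pw[: len(pw) // 2])
s2.enroll(user, pw[len(pw) // 2:])
print(login(user, pw, s1, s2))               # True
print(login(user, "wrongpassword", s1, s2))  # False
```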
Splitting the password between multiple servers ensures that if one server was compromised, the hackers would gain nothing but halved hash values with no way to combine them into an appropriate authentication scheme. Without knowledge of the second combination function, there’s no way to reverse engineer the hashes back to plaintext. The halved password hashes themselves would refresh periodically, further limiting the usefulness of a database hack.
This last issue highlights one of the most frustrating facts about online security: the size and scope of the technical attack vectors are overwhelming, and they shift on a yearly basis as new methods are discovered or come into vogue. Still, the RSA approach does at least close one potential loophole, and it’s scalable. The current implementation uses two servers, but the method could be deployed across four or more separate locations.
Using Kinect to turn any surface into a multi-user, multi-finger touchscreen
Taken from: http://www.extremetech.com/computing/137630-using-kinect-to-turn-any-surface-into-a-multi-user-multi-finger-touchscreen
With the aid of Microsoft’s much-loved Kinect sensor, engineers at Purdue University have created a system that can turn any surface — flat or otherwise — into a multi-user, multi-finger touchscreen.
The setup is disgustingly simple: You point the Kinect at some kind of surface, plug it into a computer, and then proceed to poke and prod that object as if it were a multi-finger touchscreen. Additionally, you can throw a projector into the mix, and the object becomes a multitouch display — a lot like Microsoft’s original Surface tabletop display (which is now called PixelSense, since Microsoft decided to re-use the Surface trademark for its upcoming tablets).
The magic, of course, is being performed in software. The first time you use the system, which the engineers call “Extended Multitouch,” the Kinect sensor analyzes the surface it’s pointing at. If you want to use a table as a touchscreen, for example, the Kinect takes a look at the table and works out exactly how far away the surface is (a depth map, in essence). Then, if you want to interact with the touchscreen, the Kinect works out how far away your fingers are — and if they’re the same distance as the table, the software knows you’re making contact. The software can do this for any number of fingers (or other implements, such as pens/styluses).
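A stripped-down version of that depth comparison is easy to sketch with NumPy: record the depth of the empty surface per pixel during calibration, then mark as a touch any pixel that currently sits just in front of (or at) the surface. The tolerance values and frame sizes below are illustrative assumptions; the Purdue system additionally tracks individual fingers, hands, and hover states, which this omits.

```python
import numpy as np

# Illustrative values; real tuning depends on the sensor's noise and resolution.
MIN_HEIGHT_MM = 2.0     # ignore pixels belonging to the bare surface itself
MAX_HEIGHT_MM = 12.0    # anything much closer to the sensor is hovering


def calibrate(empty_surface_frames: list) -> np.ndarray:
    """Average a few depth frames of the empty surface into a per-pixel depth map."""
    return np.mean(np.stack(empty_surface_frames), axis=0)


def touch_mask(depth_frame: np.ndarray, surface_depth: np.ndarray) -> np.ndarray:
    """True wherever something sits just in front of the surface, i.e. touching it."""
    height_above = surface_depth - depth_frame   # mm between object and surface
    return (height_above > MIN_HEIGHT_MM) & (height_above < MAX_HEIGHT_MM)


# Toy example with a 4x4 "depth image" (distances from the sensor, in mm).
surface = calibrate([np.full((4, 4), 1000.0)])  # table is one metre away
frame = surface.copy()
frame[1, 2] = 995.0    # a fingertip resting on the table: flagged as a touch
frame[3, 0] = 700.0    # a hand hovering 30 cm above it: ignored
print(touch_mask(frame, surface).astype(int))
```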
To test Extended Multitouch, the engineers wrote a few simple programs, including a sketching program that allowed users to draw with a pen and multiple fingers simultaneously. Overall accuracy is good, but not as reliable as displays with built-in touch sensors (such as your smartphone). As far as handedness and gesture recognition goes, accuracy is around 90% — but that will increase with sensor resolution and continued software tweaks.
Unlike built-in touch sensors, which are expensive and require a local computer controller, it would be quite easy to blanket a house in depth-sensing cameras. If you buried a grid of Kinects in your ceiling, you could turn most of your environment into a touch interface. In the words of Niklas Elmqvist, one of Extended Multitouch’s developers, “Imagine having giant iPads everywhere, on any wall in your house or office, every kitchen counter, without using expensive technology.”
Sunday, October 7, 2012
Backup master class: Are online file lockers backup?
Taken from: http://www.extremetech.com/computing/137429-backup-master-class-are-online-file-lockers-backup
Cloud storage has also become extremely popular. Asus offers online storage as a perk with its tablets. Google, Amazon, and Microsoft all have their own versions; Dropbox passed the 50 million user mark a year ago. This article discusses file lockers in general, but since we’re primarily concerned with backup rather than file sharing, it can’t be characterized as a review.
Most cyberlockers prioritize their sharing capabilities and don’t literally refer to themselves as backup services — but the language is very similar. For example:
- Amazon Cloud: “Never worry about losing your precious photos, documents and videos. Store them in your Cloud Drive where they will be protected from a hard drive crash or a lost or stolen laptop.”
- Dropbox: “Even if you accidentally spill a latte on your laptop, have no fear! You can relax knowing that Dropbox always has you covered, and none of your stuff will ever be…”
- Google Drive: “Things happen... No matter what happens to your devices, your files are safely stored in Google Drive.” (emphasis original)
- Microsoft SkyDrive: “With SkyDrive, you can securely store your files… The files and photos you store in SkyDrive are protected by first-rate security features.”
Do file lockers count as online backup services?
Yes and no. File lockers are backups in the sense that they store files in an offsite location. If your hard drive suddenly dies, the files you copied to a file locker will still be there. Like online backup services, most file lockers retain multiple versions of a file to allow you to revert changes to a document. Some also offer undelete protection; Microsoft’s SkyDrive recently implemented a Recycle Bin feature that allows you to recover files for up to 30 days after you’ve deleted them.
One of the key distinctions between a backup service and a file locker is that modern backup services typically auto-select key folders and files to back up by default. File lockers, in contrast, simply drop a new folder into Windows Explorer and leave the rest to you. You can use Dropbox or Google Drive as a backup — provided that you manually arrange for everything you want backed up to land in the appropriate subdirectories, as the sketch below shows.
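That manual configuration can be as crude as periodically mirroring the folders you care about into the locker’s synced directory. A rough sketch, assuming a standard Dropbox folder location and a hand-picked list of source folders (both of which you would change to suit your machine):

```python
import shutil
from pathlib import Path

# Both paths are assumptions for illustration; point them at your own folders.
SYNCED_DIR = Path.home() / "Dropbox" / "Backups"
SOURCES = [Path.home() / "Documents", Path.home() / "Pictures"]


def mirror(sources: list, dest_root: Path) -> None:
    """Copy each source folder into the synced directory; the locker's desktop
    client then uploads the copies, giving you an ad hoc offsite backup."""
    dest_root.mkdir(parents=True, exist_ok=True)
    for src in sources:
        dest = dest_root / src.name
        if dest.exists():
            shutil.rmtree(dest)       # crude full refresh, not incremental
        shutil.copytree(src, dest)


if __name__ == "__main__":
    mirror(SOURCES, SYNCED_DIR)
```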
File lockers are often one component of a larger service, even if they’re offered to anyone who wants one. You don’t have to own a Kindle, use Google Docs, or make use of Microsoft’s various online services to use Cloud Drive, Google Drive, or SkyDrive, but all three companies make certain you’re aware of those options at every turn. Most content management is handled either in Windows Explorer or via a basic browser interface.
Cloud lockers also tend to handle sharing and locality differently than backup services. Mozy and Carbonite offer mobile clients that you can use to access archived data, but they don’t include external links that another person can use to access your files. As for locality, backup services nearly universally insist that any data to be backed up is also kept locally; many file lockers take a more nuanced approach.
Scientists planning $1 billion mission to drill into the Earth’s mantle
Taken from: http://www.extremetech.com/extreme/137281-scientists-planning-1-billion-mission-to-drill-into-the-earths-mantle
Deep below our feet, past the thin crust of our planet, lies the mantle. Despite the fact that it makes up the vast majority of the Earth’s mass, we know very little about the composition of this region. What could be there? Mole-men? Crab people? Probably nothing so fanciful, but an international team of researchers is about to find out. At an estimated cost of $1 billion, geologists headed by the Integrated Ocean Drilling Program (IODP) are preparing to start drilling into the mantle for the first time.
The mantle is a 3,000 km-thick layer of super-heated, mostly solid rock that fills the space between the human-dominated crust and the dense, iron-rich core. It is believed that knowing more about the makeup of the mantle could have a significant impact on our understanding of the origins and nature of Earth. Everything from seismology to climatology and plate tectonics could be affected.
This isn’t going to be an afternoon excursion to the drilling rig, though. It’s going to take a long time to reach the mantle, which is a minimum of 6 km beneath the crust under the best conditions. After considering various methods, researchers have decided to drill through the crust in the Pacific Ocean, which is the only place where the 6 km figure holds true. On dry land the crust can be ten times thicker.
To reach the mantle, scientists will be using a custom-built Japanese drilling rig called Chikyu. The Chikyu was first launched in 2002, and is capable of carrying 10 km of drilling pipes. The team is going to need most of that to get down to the seabed and through to the mantle. The Chikyu holds the current deep-sea drilling record, having made it 2.2 km into the seafloor. This will be a much greater challenge.
The goal is impressive all on its own, but it isn’t until you look at the logistics of making it all work that you realize what a monumental undertaking this is. The high-tech drill bits being used to bore down into the crust only have an active lifespan of 50-60 hours. After that, the team will have to back out of the hole, change the bit, and plunge back down to the murky depths. To top it off, the borehole is only 30 cm across… and at the bottom of the sea.
One researcher involved in the project, Damon Teagle, described this procedure as trying to align a steel tube the width of a human hair with a 1/10mm hole when it’s at the bottom of a swimming pool. Certainly accomplishments like the Curiosity/MSL landing are an example of great science, but here we have some amazingly precise science happening right on Earth.
With the technology available today, researchers at the IODP believe that it will take years to reach the mantle. It’s going to be time consuming to change out those drill bits every few days. Teagle suspects that the project could get underway within the next few years. Barring a significant advancement in drilling technology, we should get our first samples from the mantle in the early 2020s.
This project might not have the sexiness of landing a rover on Mars, but it has the potential to vastly increase our knowledge about the evolution and fate of our planet. For the time being it’s the only one we’ve got, so that’s important knowledge to have.
Microsoft promises major Windows 8 app improvements before Oct 26 launch
Taken from: http://www.extremetech.com/computing/137469-microsoft-promises-major-windows-8-app-improvements-before-oct-26-launch
Microsoft has taken some flak for the purported condition of Windows 8 in recent weeks; Intel and Redmond tangled on the topic last week in a bit of corporate he-said/she-said. A new blog post from the Building Windows 8 team indirectly addresses some of the concerns potential W8 adopters might have in the wake of the public spat, by promising a number of updates will be delivered between now and launch day.
According to Gabriel Aul, new updates will start rolling out today with an improved Bing app, but that’s just the beginning. After Bing, Microsoft is rolling out improvements to SkyDrive, Mail, Photos, Maps, its News service, and a number of others.
The full list of app changes is available on the BW8 blog. We’ve put together some of the highlights and most important differences below:
One of the most significant deficiencies of the current Mail client in Windows 8 is that it lacks support for POP or IMAP. Microsoft apparently doesn’t plan to support POP at all, but IMAP, at least, is coming before launch day. Other changes are clearly intended to improve Metro functionality (Windows Photo) and simplify switching between the Windows 8-style UI (Metro) and the Desktop. Content pagination and zoom levels are other areas where Windows 8 has been measured and found wanting; these updates will hopefully solve some of the ongoing problems.
Closed caption support, meanwhile, might seem like a low-priority issue, but it’s actually a major concern for the hearing impaired. The FCC ruled earlier this year that broadcasters had to begin including closed caption support in streaming video; Windows 8 support is an important step to making such capability ubiquitous.
Other improvements are less clear. “Improved offline reading experience” and “rich ‘now playing’ experience” don’t tell us much about the new features or what users can expect. Integrating content from the New York Times and Wall Street Journal will improve the range of information “News” presents, but we’ll be curious to see how Windows 8 treats the NYT’s 10-article-a-month preview option and the WSJ’s paywall.
Part of what’s noteworthy here is how Redmond has successfully changed updates from something it delivers to a monolithic OS to fix security issues into targeted features and bug fixes that expand application capability. To be sure, some of that expanded capability is being put towards things Windows 8 should’ve done before it went RTM — but applying continuous improvements at the app level lets the company talk about new features far more effectively than the standard Windows Update screen does.