Wednesday, May 8, 2013

Glowing plants


A glowing plant that could provide a sustainable light source has caught the imagination of backers on the crowdfunding website Kickstarter.
With a month still to go, the project has raised $243,000 (£157,000). Its initial goal was $65,000.
Backers are promised seeds for glowing plants, although delivery will not be until next May at the earliest.
The "biohacking" team behind the project said that in future trees could act as street lights.
The researchers are keen that their mix of DIY synthetic biology and sustainable lighting remains open-source.
"Inspired by fireflies... our team of Stanford-trained PhDs are using off-the-shelf methods to create real glowing plants in a do-it-yourself bio lab in California," said project leader Antony Evans.
"All of the output from this project will be released open-source, the DNA constructs, the plants etc," it said on its website.
Commercially appealing
The research team, led by synthetic biologist Omri Amirav-Drory and plant scientist Kyle Taylor, aims to transplant a fluorescent gene into a small plant called Arabidopsis, a member of the mustard family.
The team has chosen this plant as it is easy to experiment with and carries minimal risk for spreading into the wild.
However, it hopes that the same process will work for a rose, which it considers to be more commercially appealing.
The team will work with luciferase, an enzyme common in fireflies as well as some glowing fungi and bacteria.
The researchers have already designed the DNA sequences using software from a company called Genome Compiler, which allows people to easily design genetic sequences.
They will then "print the DNA" and the final stage will be to transfer this to the plants.
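The article gives no detail on what designing a construct in such software actually looks like, but conceptually a construct is just a long sequence assembled from standard parts. Here is a minimal, purely illustrative Python sketch of that idea; every sequence in it is a made-up placeholder, not the project's actual design.

```python
# Purely illustrative: a toy construct assembled from hypothetical parts.
# None of these sequences come from the Glowing Plant project.

PROMOTER = "TTGACA" + "N" * 17 + "TATAAT"    # stand-in promoter motif
LUCIFERASE_CDS = "ATG" + "GCT" * 50 + "TAA"  # stand-in coding sequence (start..stop)
TERMINATOR = "AAAAAAAGGCTCC"                 # stand-in terminator

def assemble_construct(*parts: str) -> str:
    """Join parts (promoter, coding sequence, terminator) into one construct."""
    return "".join(parts)

def looks_like_orf(seq: str) -> bool:
    """Rough sanity check: starts with ATG, ends with a stop codon, length is a multiple of 3."""
    return seq.startswith("ATG") and seq[-3:] in {"TAA", "TAG", "TGA"} and len(seq) % 3 == 0

construct = assemble_construct(PROMOTER, LUCIFERASE_CDS, TERMINATOR)
print(len(construct), looks_like_orf(LUCIFERASE_CDS))
```

Real design tools layer codon optimization, restriction-site checks and much more on top of this, but the basic object being edited is still just a sequence of bases.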
'Great inspiration'
Initially the genes are transferred to agrobacteria, increasingly used in genetic engineering because they can transfer DNA between themselves and plants.

This method will only be used for prototypes as the bacteria are plant pests and any use of such organisms is heavily regulated.
For the seeds that will be sent to the public, the team will use a gene gun that effectively coats nanoparticles with DNA and fires them into plants. This method is not subject to regulation.
George Church, a professor of genetics at Harvard Medical School who is backing the project, said that biology could provide great inspiration for more sustainable light sources.
"Biology is very energy-efficient and energy packets are more dense than batteries. Even a weakly glowing flower would be a great icon."
The team is not the first to create glowing plants.
'Pretty enticing'
In 2008 scientists at the University of California created a glowing tobacco plant using luciferase.
And in 2010 researchers from the University of Cambridge were able to make bacteria glow sufficiently to read by.
Theo Sanderson, a member of that Cambridge team, has blogged about the new attempt.
"Nobody can deny that the idea of walking down a path lit by glowing trees is pretty enticing... what has disappointed me has been the lack of discussion as to what the team actually plan to do with the funds raised, and whether the science stacks up," he said.
"My prediction is that this project will ship plants which have a dimly visible luminescence in a pitch-black room."


Monday, April 1, 2013

Why Computing Won't Be Limited By Moore's Law. Ever



In less than 20 years, experts predict, we will reach the physical limit of how much processing capability can be squeezed out of the silicon-based processors at the heart of our computing devices. But a recent scientific finding that could completely change the way we build computing devices may let engineers sidestep that limit entirely.
The breakthrough from materials scientists at IBM Research doesn't sound like a big deal. In a nutshell, they claim to have figured out how to convert metal oxide materials, which act as natural insulators, to a conductive metallic state. Even better, the process is reversible.
Shifting materials from insulator to conductor and back is not exactly new, according to Stuart Parkin, IBM Fellow at IBM Research. What is new is that these changes in state are stable even after you shut off the power flowing through the materials.
And that's huge.
Power On… And On And On And On…
When it comes to computing — mobile, desktop or server — all devices have one key problem: they're inefficient as hell with power.
As users, we experience this every day with phone batteries dipping into the red, hot notebook computers burning our laps or noisily whirring PC fans grating our ears. System administrators and hardware architects in data centers are even more acutely aware of power inefficiency, since they run huge collections of machines that mainline electricity while generating tremendous amounts of heat (which in turn eats more power for the requisite cooling systems).
Here's one basic reason for all the inefficiency: Silicon-based transistors must be powered all the time, and as current runs through these very tiny transistors inside a computer processor, some of it leaks. Both the active transistors and the leaking current generate heat — so much that without heat sinks, water lines or fans to cool them, processors would probably just melt.
Enter the IBM researchers. Computers process information by switching transistors between two states, on or off, 1s or 0s, and in today's processors that happens while power flows the whole time. But suppose you could switch a transistor with just a microburst of electricity instead of supplying it constantly with current. The power savings would be enormous, and the heat generated far, far lower.
That's exactly what the IBM team says it can now accomplish with its state-changing metal oxides. This kind of ultra-low power use is similar to the way neurons in our own brains fire to make connections across synapses, Parkin explained. The human brain is more powerful than the processors we use today, he added, but "it uses a millionth of the power."
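To get a feel for why event-driven switching matters, here is a back-of-envelope sketch in Python. Every number in it is an assumption picked only for illustration (none come from IBM's research), so the exact ratio is meaningless; the point is that a cell which only draws power when its state actually changes stops paying the constant leakage tax.

```python
# Back-of-envelope comparison; all numbers are illustrative assumptions, not IBM figures.

LEAK_CURRENT_A = 1e-9       # assumed static leakage while a conventional cell stays powered
SUPPLY_VOLTAGE_V = 1.0      # assumed supply voltage
SWITCH_ENERGY_J = 1e-16     # assumed energy per state-change "microburst"
SWITCHES_PER_SEC = 1e3      # assumed rate at which this particular cell changes state

def always_on_energy(seconds: float) -> float:
    """Energy spent just keeping a leaky transistor powered, even when idle."""
    return LEAK_CURRENT_A * SUPPLY_VOLTAGE_V * seconds

def event_driven_energy(seconds: float) -> float:
    """Energy spent if power is only drawn during discrete switching events."""
    return SWITCH_ENERGY_J * SWITCHES_PER_SEC * seconds

t = 1.0  # one second
print(f"always powered : {always_on_energy(t):.1e} J")
print(f"event driven   : {event_driven_energy(t):.1e} J")
```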
The implications are clear. Assuming this technology can be refined and actually manufactured for use in processors and memory, it could form the basis of an entirely new class of electronic devices that would barely sip power. Imagine a smartphone with that kind of technology. The screen, speakers and radios would still need power, but the processor and memory hardware would barely touch the battery.
Moore's Law? What Moore's Law?
There's a lot more research ahead before this technology sees practical applications. Parkin explained that the fluid used to help achieve the steady state changes in these materials needs to be more efficiently delivered using nano-channels, which is what he and his fellow researchers will be focusing on next.
Ultimately, this breakthrough is one among many that we have seen and will see in computing technology. Put in that perspective, it's hard to get too impressed. But stepping back a bit, it's clear that the so-called end of the road for processors due to physical limits is probably not as big a deal as one would think. True, silicon-based processing may see its time pass, but there are other technologies on the horizon that should take its place.
Now all we have to do is think of a new name for Silicon Valley.

Source : http://readwrite.com/2013/03/30/computing-wont-be-limited-by-moores-law-ever

Tuesday, February 5, 2013

Scientists successfully store data in DNA



Scientists have successfully stored an audio recording of Martin Luther King Jr.’s “I Have a Dream” speech, plus Shakespearean sonnets and more, on a strand of synthetic DNA. They were able to translate, store, and retrieve this data. This strand of DNA is much like the DNA contained in the cells of living organisms. Furthermore, these scientists claim that it may be possible to store a billion books’ worth of data – for thousands of years – in a single small test tube. They published this result in the journal Nature on January 23, 2013.

Storing information is what DNA does best. DNA holds the genetic code of each species, and spells out the exact instructions required to create a particular organism. The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). The information stored on computers is binary, meaning the data is represented by 1s and 0s. Scientists have long tried to replicate nature’s way of storing information, but it’s been elusive until now.

The European Bioinformatics Institute generates a huge amount of data, and data storage is a real concern. Nick Goldman and his colleague, Ewan Birney, dreamed up the solution over a few beers one evening. Previous attempts at encoding data onto DNA failed because those methods tried to translate a computer's binary data directly onto the DNA, and binary repetition caused errors in retrieval. The team instead translated the binary information into ternary (which uses 0, 1, and 2) and then encoded that into the DNA. The researchers think their system might be able to store the roughly 3 zettabytes (a zettabyte is one billion trillion bytes) of digital data thought to presently exist in the world!
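The paper's actual scheme uses a Huffman code and adds addressing and error correction, but the core trick of never repeating a base can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the published encoding.

```python
# Simplified sketch of the rotating-base idea: convert data to base-3 digits, then encode
# each digit as one of the three bases that differ from the previous base, so the same
# base never appears twice in a row. Not the Goldman/Birney paper's exact scheme.

NEXT_BASE = {"A": "CGT", "C": "GTA", "G": "TAC", "T": "ACG"}

def to_trits(data: bytes) -> list[int]:
    """Represent the data as a list of base-3 digits."""
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return trits[::-1] or [0]

def trits_to_dna(trits: list[int], prev: str = "A") -> str:
    bases = []
    for t in trits:
        prev = NEXT_BASE[prev][t]  # always pick a base different from the previous one
        bases.append(prev)
    return "".join(bases)

dna = trits_to_dna(to_trits(b"I have a dream"))
print(dna)
assert all(a != b for a, b in zip(dna, dna[1:]))  # no immediate repeats
```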


DNA double helix
Their work was published this week in the journal Nature, nearly 60 years after Watson and Crick published Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid – the landmark paper that detailed the double helical structure of DNA – in the same journal on April 25, 1953.

Pretty amazing breakthrough!

Source : http://earthsky.org/human-world/scientists-successfully-store-data-in-dna

Sunday, January 20, 2013

Facebook graph search ???


Facebook's Graph Search is the future of search. Even before Google was a verb, the Holy Grail for search engines was to deliver the most relevant results despite not knowing who you were or exactly what you were looking for. Now Facebook can stop guessing who you are, because it already knows you, and start serving up hyper-personalized answers tailored to you and based on the Facebook social universe.

Search leaders haven't been sitting idly by. Google's own hyper-personal search tool is called Google Now and landed on desktop search just last month. Microsoft's Bing has woven what it calls Social Search deep into its search engine. The hyper-personal search race has already been sparked; Facebook’s Graph Search ignites the revolution.

Getting personal

Personalized search is nothing new. We've gotten whiffs of the benefits of personalized search over time. Netflix has spent years honing its recommendation engine designed to keep you coming back to watch more movies and TV shows. Amazon recommends books, music, and numerous other products based on your past purchases. Pandora developed an algorithm that can generate playlists based on songs you tell it you like.

The secret to the success of Amazon, Netflix, and Pandora is that the scope of the guessing was limited: to you, and to a defined set of products, movies, and songs. The challenge for the leaders in search, Microsoft's Bing and Google, was that the data set was everything under the sun and you were an unknown. It's easier to create a search algorithm that uses past movie preferences to guess what similar movies you like. It's much harder for Bing to guess what movie you'd like based on the query "find me a really funny movie that I'd like."


Now Bing, Google, and Facebook can begin to know who you are, who your friends are, your likes, where you go, about that failed diet, and where you vacation. The results are good, if we don't get too hung up on the privacy debate. In the age of Big Data, search engines can sift through your digital dossier and pair that with relevant search results.

But the bigger question is: How do (and will) hyper-personalized results differ between Bing, Facebook, and Google? The services can't all be the same, by the nature of what they are, even if they all want to offer the same best search result.

Who knows best?
For example, ask Facebook's Graph Search for “friends of friends who have been to Yosemite National Park.” Theoretically, this query could link you to friends who might be able to give you tips on where to hike. You'd never be able to search Google and get a list of friends' names who have been to Yosemite, but you might be able to find better trails to hike that match your interests.
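Facebook has not published how Graph Search executes such queries, but conceptually it is a constrained traversal of a social graph. Here is a toy Python sketch with entirely made-up data; this is not Facebook's data model or API.

```python
# Toy social graph; names and places are invented for illustration only.
friends = {
    "me":   {"ana", "bo"},
    "ana":  {"me", "carl"},
    "bo":   {"me", "dana"},
    "carl": {"ana"},
    "dana": {"bo"},
}
visited_places = {
    "ana":  {"Yosemite National Park"},
    "carl": {"Yosemite National Park"},
    "dana": {"Grand Canyon"},
}

def friends_of_friends(user: str) -> set[str]:
    """Everyone exactly two hops away, excluding the user and direct friends."""
    direct = friends.get(user, set())
    two_hops = set().union(*(friends.get(f, set()) for f in direct)) if direct else set()
    return two_hops - direct - {user}

def who_has_been(people: set[str], place: str) -> set[str]:
    return {p for p in people if place in visited_places.get(p, set())}

print(who_has_been(friends_of_friends("me"), "Yosemite National Park"))  # {'carl'}
```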

Facebook knows all about your personal relationships and your interests. Our early opinion of Facebook’s Graph Search proved underwhelming, but its potential is great.


Google knows about your Web habits, your most frequently emailed contacts, and your calendar appointments. A year ago it announced Search Plus Your World, a push to personalize your search results by including more Google+ profiles, business pages, posts, and Google+ and Picasa photos among the returns. Last month it brought Google Now, the so-called intelligent personal assistant from its Android 4.1 OS, to desktop searches. Google Now uses the Google Knowledge Graph to present search results Google predicts you'll find useful, based on your search habits.

Bing relies on its partnerships to help personalize searches. And Bing's closest friend, thanks to a pricey investment, is Facebook. Bing harnesses the power of Facebook with its Bing social sidebar. Just this past week, Bing updated its sidebar with five times more Facebook data, Microsoft says. Bing social sidebar includes topically related status updates, shared links, and comments from Facebook friends. It also draws publicly shared data from high-profile users on other social networks, such as Twitter, Quora, Klout, Foursquare, and Google+.

Hitting the road
The wild card for personalized search is mobile. The biggest chance for hyper-personalization comes from the always-on and location-aware mobile devices we carry around with us every day. The latest data from market researchers at Comscore suggest search is migrating away from desktops to mobile devices.


Facebook’s Graph Search on a phone may be years away. But one can imagine a mobile Graph Search app alerting us to, say, what percentage of Facebook friends liked a particular restaurant as we stroll past it.

Mobile technology both creates new data-collecting possibilities for search engines and allows them to be more situation-aware, delivering relevant results based on behavior patterns, the context of what you are doing, and when you're doing it. Google’s Google Now service says it “gets you just the right information at just the right time.” But in my experience with Google Now, it hasn’t yet delivered on that promise.

Try asking Google Now on your smartphone to “find me Starbucks” while navigating with Google Maps on a road trip. If you’re lucky, Google will find you a Starbucks just a few exits ahead. But it’s my experience Google Now more often than not chokes and spits up directions to a Starbucks I passed 20 minutes earlier.

I can’t decide who has collected more data about me. Is it Google or Facebook? When it comes to hyper-personalized search, maybe it doesn’t matter who has the biggest data set. Facebook’s beta version of Graph Search isn’t winning over the critics, yet. But the hyper-personalization search wars have just gotten started. By 2014, who knows: Maybe Facebook will find me a Starbucks 90 percent of my “friends” like, just down the road a few exits.

Source : http://www.pcworld.com/article/2025799/how-facebook-graph-search-will-ignite-a-search-revolution.html

Monday, January 14, 2013

Oracle Ships Critical Security Update for Java


Oracle has released a software update to fix a critical security vulnerability in its Java software that miscreants and malware have been exploiting to break into vulnerable computers.
Java 7 Update 11 fixes a critical flaw (CVE-2013-0422) in Java 7 Update 10 and earlier versions of Java 7. The update is available via Oracle’s Web site, or can be downloaded from within Java via the Java Control Panel. Existing users should be able to update by visiting the Windows Control Panel and clicking the Java icon, or by searching for “Java” and clicking the “Update Now” button from the Update tab.
This update also changes the way Java handles Web applications. According to Oracle’s advisory: “The default security level for Java applets and web start applications has been increased from “Medium” to “High”. This affects the conditions under which unsigned (sandboxed) Java web applications can run. Previously, as long as you had the latest secure Java release installed applets and web start applications would continue to run as always. With the “High” setting the user is always warned before any unsigned application is run to prevent silent exploitation.”
If you need Java for a specific Web site, consider adopting a two-browser approach. If you normally browse the Web with Firefox, for example, consider disabling the Java plugin in Firefox, and then using an alternative browser (Chrome, IE9, Safari, etc.) with Java enabled to browse only the site(s) that require(s) it.

Friday, January 4, 2013

Video Analysis: Detecting Text Everywhere



As video recording technology improves in performance and falls in price, ever-more events are being captured within video files. If all of this footage could be searched effectively, it would represent an invaluable information repository. One option to help catalogue large video databases is to extract text, such as street signs or building names, from the background of each recording. Now, a method that automates this process has been developed by a research team at the National University of Singapore, which also included Shijian Lu at the A*STAR Institute for Infocomm Research.

Previous research into automated text detection within images has focused mostly on document analysis. Recognizing background text within the complex scenes typically captured by video is a much greater challenge: it can come in any shape or size, be partly occluded by other objects, or be oriented in any direction.

The multi-step method for automating text recognition developed by Lu and co-workers overcomes these challenges, particularly the difficulties associated with multi-oriented text. Their method first processes video frames using 'masks' that enhance the contrast between text and background. The researchers developed a process to combine the output of two known masks to enhance text pixels without generating image noise. From the contrast-enhanced image, their method then searches for characters of text using an algorithm called a Bayesian classifier, which employs probabilistic models to detect the edges of each text character.
Even after identifying all characters in an image, a key challenge remains, explains Lu. The software must detect how each character relates to its neighbors to form lines of text -- which might run in any orientation within the captured scene. Lu and his co-workers overcame this problem using a so-called 'boundary growing' approach. The software starts with one character and then scans its surroundings for nearby characters, growing the text box until the end of the line of text is found. Finally, the software eliminates false-positive results by checking that identified 'text boxes' conform to certain geometric rules.
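The team's implementation is not reproduced in the article, but the boundary-growing step can be illustrated with simple bounding-box operations. In this Python sketch the margin threshold and the boxes are arbitrary assumptions, not values from the paper.

```python
# Boxes are (x_min, y_min, x_max, y_max). Grow a text box from a seed character box
# by repeatedly absorbing any character box that lies within `margin` pixels of it.

def expand(box, margin):
    x0, y0, x1, y1 = box
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def merge(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def grow_text_box(seed, characters, margin=5):
    """Return the grown text box plus the character boxes left over for other lines."""
    text_box = seed
    remaining = [c for c in characters if c != seed]
    changed = True
    while changed:
        changed = False
        for c in list(remaining):
            if overlaps(expand(text_box, margin), c):
                text_box = merge(text_box, c)
                remaining.remove(c)
                changed = True
    return text_box, remaining

chars = [(10, 10, 20, 30), (24, 11, 34, 31), (38, 12, 48, 32), (200, 200, 210, 220)]
line_box, leftovers = grow_text_box(chars[0], chars)
print(line_box, leftovers)  # one text line grown from three nearby characters
```

Because the box grows in whatever direction nearby characters happen to lie, the same loop handles horizontal, vertical or slanted lines, which is what makes the approach suited to multi-oriented video text.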

Tests using sample video frames confirmed that the new method is the best yet at identifying video text, especially for text not oriented horizontally within the image, says Lu. However, there is still room for refinement, such as adapting the method to identify text not written in straight lines. "Document analysis methods achieve more than 90% character recognition," Lu adds. "The current state-of-the-art for video text is around 67-75%. There is a demand for improved accuracy."

Thursday, January 3, 2013

Speech-Based Emotion Classification Developed


If you think having your phone identify the nearest bus stop is cool, wait until it identifies your mood. New research by a team of engineers at the University of Rochester may soon make that possible. At the IEEE Workshop on Spoken Language Technology on Dec. 5, the researchers will describe a new computer program that gauges human feelings through speech, with substantially greater accuracy than existing approaches.

Surprisingly, the program doesn't look at the meaning of the words. "We actually used recordings of actors reading out the date of the month -- it really doesn't matter what they say, it's how they're saying it that we're interested in," said Wendi Heinzelman, professor of electrical and computer engineering.

Heinzelman explained that the program analyzes 12 features of speech, such as pitch and volume, to identify one of six emotions from a sound recording. And it achieves 81 percent accuracy -- a significant improvement on earlier studies that achieved only about 55 percent accuracy.

The research has already been used to develop a prototype of an app. The app displays either a happy or sad face after it records and analyzes the user's voice. It was built by one of Heinzelman's graduate students, Na Yang, during a summer internship at Microsoft Research. "The research is still in its early days," Heinzelman added, "but it is easy to envision a more complex app that could use this technology for everything from adjusting the colors displayed on your mobile to playing music fitting to how you're feeling after recording your voice." Heinzelman and her team are collaborating with Rochester psychologists Melissa Sturge-Apple and Patrick Davies, who are currently studying the interactions between teenagers and their parents. "A reliable way of categorizing emotions could be very useful in our research," Sturge-Apple said. "It would mean that a researcher doesn't have to listen to the conversations and manually input the emotion of different people at different stages."

Teaching a computer to understand emotions begins with recognizing how humans do so. "You might hear someone speak and think 'oh, he sounds angry!' But what is it that makes you think that?" asks Sturge-Apple. She explained that emotion affects the way people speak by altering the volume, pitch and even the harmonics of their speech. "We don't pay attention to these features individually, we have just come to learn what angry sounds like -- particularly for people we know," she adds. But for a computer to categorize emotion it needs to work with measurable quantities. So the researchers established 12 specific features in speech that were measured in each recording at short intervals. The researchers then categorized each of the recordings and used them to teach the computer program what "sad," "happy," "fearful," "disgusted," or "neutral" sound like.

The system then analyzed new recordings and tried to determine whether the voice in the recording portrayed any of the known emotions. If the computer program was unable to decide between two or more emotions, it just left that recording unclassified. "We want to be confident that when the computer thinks the recorded speech reflects a particular emotion, it is very likely it is indeed portraying this emotion," Heinzelman explained. 
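The Rochester team's actual classifier is not described in enough detail here to reproduce, but the classify-or-reject behaviour can be illustrated with a toy nearest-centroid model over pre-extracted features. Everything below (the feature values, the margin rule, the use of only three of the twelve features) is invented for illustration.

```python
# Toy classify-or-reject sketch; not the University of Rochester algorithm.
import math

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

def train(labelled_clips):
    """labelled_clips: dict mapping emotion -> list of per-clip feature vectors."""
    return {emotion: centroid(clips) for emotion, clips in labelled_clips.items()}

def classify(model, features, margin=0.2):
    scored = sorted((math.dist(features, c), emotion) for emotion, c in model.items())
    (best_d, best), (second_d, _) = scored[0], scored[1]
    if second_d - best_d < margin:   # too close to call: refuse to guess
        return "unclassified"
    return best

# Made-up training data using only 3 of the 12 speech features, for brevity.
training = {
    "happy": [[220, 0.8, 0.6], [230, 0.9, 0.7]],
    "sad":   [[180, 0.3, 0.2], [175, 0.2, 0.3]],
    "angry": [[240, 1.0, 0.9], [250, 1.1, 1.0]],
}
model = train(training)
print(classify(model, [225, 0.85, 0.65]))  # "happy"
```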

Previous research has shown that emotion classification systems are highly speaker dependent; they work much better if the system is trained by the same voice it will analyze. "This is not ideal for a situation where you want to be able to just run an experiment on a group of people talking and interacting, like the parents and teenagers we work with," Sturge-Apple explained.

Their new results also confirm this finding: when the speech-based emotion classification was used on a voice different from the one that trained the system, the accuracy dropped from 81 percent to about 30 percent. The researchers are now looking at ways of minimizing this effect, for example by training the system with a voice in the same age group and of the same gender. As Heinzelman said, "there are still challenges to be resolved if we want to use this system in an environment resembling a real-life situation, but we do know that the algorithm we developed is more effective than previous attempts."

Citation : University of Rochester (2012, December 4). Smartphones might soon develop emotional intelligence: Algorithm for speech-based 

Friday, December 21, 2012

Khmer Like Mayans



On different sides of the planet, two civilizations shared many similarities, including a vulnerability to regional climate changes.
The classical period Mayans in Central America and the Khmer in Southeast Asia both hacked a space for their people out of tropical forests and constructed impressive stone cities with sophisticated water storage systems. But their Achilles heel was a dependence on seasonal rains for their crops and drinking water. When regional climate changes caused erratic rainfall, their cities and fields may have dried out and left them vulnerable to collapse.
Between the 9th and 15th centuries, the Khmer Empire dominated much of what is now Laos, Cambodia, Thailand, Vietnam and Burma (Myanmar). The empire's famous capital, Angkor, may have been one of the largest pre-industrial city complexes in the world. The empire was supported by an extensive water management system, including lake-sized reservoirs called barays.
But reservoirs only work if there is rainwater to fill them. Research recently published in the Proceedings of the National Academy of Sciences suggests that a series of erratic monsoons may have destabilized the once-mighty Khmer.

Researchers led by Mary Beth Day, an earth scientist at the University of Cambridge, found evidence of a series of failed monsoons in the 14th and 15th centuries that coincided with the Khmer's collapse. Sediments in the largest Khmer reservoir, the West Baray, showed that extremely heavy downpours were followed by drought. The heavy rains may have washed away crops, and those that survived shriveled in the drought. During the dry spells, drinking water may have run low as well.
Although many other factors came into play, such as the threat of Mongol invasion and social upheaval brought on by the spread of Theravada Buddhism, the researchers note that an inability to feed their people could have weakened the Khmer to the point of collapse.
The Mayans of the classical period (c. 250 – 900 AD) could have warned the Khmer. While the Khmer were rising, the classical Mayans were falling, possibly because the rains weren't.
The Mayans too seem to have been dependent on seasonal rains. They often built their cities near natural reservoirs called cenotes or near rivers, then augmented nature with their own water management systems, such as the dam at Kinal and reservoirs at Uxul. Mayans even had pressurized water.

But when the rains failed, so did some of the Mayan city states, suggest some researchers, including Jared Diamond in his book Collapse.
Hacking a home out of the forest may have had an effect on rainfall, as well. Forests help to create rainfall when they respire moisture and help keep the ground moist. When large amounts of forest were cleared for agriculture, it may have reduced rainfall. When the rains do come, they wash away the soil which is no longer protected by the trees.
The causes of a civilization's failure are complex, and the jury is still out on what ultimately toppled the Mayans and Khmer. But evidence suggests decades-long changes in rainfall patterns may have had a serious effect.

Source : http://news.discovery.com/earth/khmer-collapsed-under-climate-pressure-120105.html