The Future of Hypersonic Travel

 Is America's Spy Plane Back — and Hypersonic?

 The SR-71 Blackbird spy plane may be back and faster than ever.
Or maybe not. Ambiguous wording at an aerospace-conference presentation last week suggested that the SR-72, a successor to the famous Cold War spy plane, might already be in production, but neither the military nor the plane's possible maker, Lockheed Martin Corp., is talking.
Bloomberg reported that Lockheed Vice President Jack O'Banion projected an artist's conception of the hypersonic SR-72 during a talk at the annual SciTech Forum of the American Institute of Aeronautics and Astronautics in Florida on Jan. 8. Standing by the image of the sleek gray aircraft, O'Banion reportedly spoke about recent advances in computing and design and then said, "Without the digital transformation, the aircraft you see there could not have been made." [Supersonic! The 11 Fastest Military Airplanes]
The wording implies that the aircraft has, in fact, been made, as do O'Banion's later comments, which were also in the present tense.
"We couldn't have made the engine itself — it would have melted down into slag if we had tried to produce it five years ago," O'Banion said, according to Bloomberg. "But now, we can digitally print that engine with an incredibly sophisticated cooling system integral into the material of the engine itself, and have that engine survive for multiple firings for routine operation."
Both the U.S. Air Force and Lockheed Martin declined to confirm the existence of a real hypersonic (more than 5 times the speed of sound) spy plane to Bloomberg. Outside defense experts told the news outlet that the plane could be anywhere in the development process, from the digital design phase to the prototype phase.
The SR-71 Blackbird was a speedy, stealthy spy plane specifically designed to absorb, rather than reflect, radar signals and, hence, remain relatively hidden. Though it's more than 107 feet (33 meters) long, the plane appeared on radar as an object somewhere between the size of a bird and a human, according to Lockheed Martin's history of the plane.
The Blackbird was made from titanium so that it would have the strength to withstand the heat generated while flying at above Mach 3 (2,045 mph or 3,300 km/h), or three times the speed of sound. Developers painted the plane black to dissipate some of the heat, lending the aircraft its "Blackbird" moniker, according to Lockheed Martin. The two-seat SR-71 first flew on Dec. 22, 1964. It was retired in 1990, still holding the record as the fastest air-breathing manned aircraft ever made. According to the Smithsonian National Air and Space Museum, the final flight of the Blackbird now displayed at its hangar in Chantilly, Virginia, set a speed record: The plane flew from Los Angeles to Washington, D.C., in 64 minutes and 20 seconds.
Lockheed Martin's interest in developing hypersonic "scramjets" — which use oxygen from the environment rather than a tank for combustion in the engine — is no secret, according to Bloomberg. However, the engineering challenges are steep.
Jets like the purported SR-72 could enable the Air Force to make ultrafast bombing runs into enemy airspace without being detected, defense analyst Richard Aboulafia told Bloomberg. Lockheed Martin engineers said in 2013 that they hoped to develop an SR-72 that could fly at Mach 6 (4,603 mph or 7,407 km/h) by 2030.
Secrecy and rumors are the norm in the development of new stealth aircraft. According to a Central Intelligence Agency history of the development of the SR-71, the government tried very hard to keep the development of the original Blackbird classified, but a former U.S. Navy admiral named John B. Pearson said he figured out that Lockheed was up to something by 1961, just three years after the project started. A newspaper in Fort Worth, Texas, also reported the possibility of a new faster-than-sound airplane in 1963.
Editor's Note: This article was updated to indicate that the SR-71 was not hypersonic, but instead could fly at three times the speed of sound.

Digitized DNA

DNA Has Gone Digital — What Could Possibly Go Wrong?

Biology is becoming increasingly digitized. Researchers like us use computers to analyze DNA, operate lab equipment and store genetic information. But new capabilities also mean new risks – and biologists remain largely unaware of the potential vulnerabilities that come with digitizing biotechnology.
The emerging field of cyberbiosecurity explores the whole new category of risks that come with the increased use of computers in the life sciences.
University scientists, industry stakeholders and government agents have begun gathering to discuss these threats. We've even hosted FBI agents from the Weapons of Mass Destruction Directorate here at Colorado State University and previously at Virginia Tech for crash courses on synthetic biology and the associated cyberbiosecurity risks. A year ago, we participated in a U.S. Department of Defense-funded project to assess the security of biotechnology infrastructures. The results are classified, but we disclose some of the lessons learned in our new Trends in Biotechnology paper.
Along with co-authors from Virginia Tech and the University of Nebraska-Lincoln, we discuss two major kinds of threats: sabotaging the machines biologists rely on and creating dangerous biological materials.
In 2010, a nuclear plant in Iran experienced mysterious equipment failures. Months later, a security firm was called in to troubleshoot an apparently unrelated problem. They found a malicious computer virus. The virus, called Stuxnet, was telling the equipment to vibrate. The malfunction shut down a third of the plant's equipment, stunting development of the Iranian nuclear program.
Unlike most viruses, Stuxnet didn't target only computers. It attacked equipment controlled by computers.
The marriage of computer science and biology has opened the door for amazing discoveries. With the help of computers, we're decoding the human genome, creating organisms with new capabilities, automating drug development and revolutionizing food safety.
Stuxnet demonstrated that cybersecurity breaches can cause physical damage. What if that damage had biological consequences? Could bioterrorists target government laboratories studying infectious diseases? What about pharmaceutical companies producing lifesaving drugs? As life scientists become more reliant on digital workflows, the chances of such attacks are likely rising.
The ease of accessing genetic information online has democratized science, enabling amateur scientists in community laboratories to tackle challenges like developing affordable insulin.
But the line between physical DNA sequences and their digital representation is becoming increasingly blurry. Digital information, including malware, can now be stored and transmitted via DNA. The J. Craig Venter Institute even created an entire synthetic genome watermarked with encoded links and hidden messages.
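To make that idea concrete, here is a small Python sketch of how ordinary bytes can be packed into a DNA sequence using a common two-bits-per-base mapping. It is only an illustration of the principle, not the scheme used in the watermarked Venter genome or in any real DNA data-storage system, which add error correction and other constraints.

```python
# Toy illustration: one common 2-bits-per-base mapping for packing digital
# data into a DNA sequence. Real DNA data-storage schemes add error
# correction and avoid long runs of a single base; this sketch does not.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"hello"
encoded = bytes_to_dna(message)   # 'CGGACGCCCGTACGTACGTT'
assert dna_to_bytes(encoded) == message
print(encoded)
```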
Twenty years ago, genetic engineers could only create new DNA molecules by stitching together natural DNA molecules. Today scientists can use chemical processes to produce synthetic DNA.
The sequence of these molecules is often generated using software. In the same way that electrical engineers use software to design computer chips and computer engineers use software to write computer programs, genetic engineers use software to design genes.
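As a rough picture of what "designing a gene in software" means, the sketch below back-translates a short, invented peptide into one possible DNA coding sequence using an arbitrary codon table. Real gene-design tools do far more, such as optimizing codon usage for the host organism and removing unwanted restriction sites; this only shows the core idea.

```python
# Minimal sketch of "designing a gene" in software: back-translate a short,
# made-up peptide into one possible DNA coding sequence. Real gene-design
# tools also optimize codon usage for the host organism, avoid unwanted
# restriction sites and check secondary structure; this does none of that.
CODON = {  # one arbitrary codon per amino acid (plus '*' for stop)
    "M": "ATG", "A": "GCT", "G": "GGT", "K": "AAA", "L": "CTG",
    "S": "TCT", "T": "ACT", "V": "GTT", "E": "GAA", "*": "TAA",
}

def back_translate(protein: str) -> str:
    return "".join(CODON[aa] for aa in protein)

peptide = "MAGKLSTVE*"  # hypothetical peptide, for illustration only
print(back_translate(peptide))  # ATGGCTGGTAAACTGTCTACTGTTGAATAA
```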
That means that access to specific physical samples is no longer necessary to create new biological samples. To say that all you need to create a dangerous human pathogen is internet access would be an overstatement – but only a slight one. For instance, in 2006, a journalist used publicly available data to order a fragment of smallpox DNA in the mail. The year before, the Centers for Disease Control used published DNA sequences as a blueprint to reconstruct the virus responsible for the Spanish flu, one of the deadliest pandemics of all time.
With the help of computers, editing and writing DNA sequences is almost as easy as manipulating text documents. And it can be done with malicious intent.
The conversations around cyberbiosecurity so far have largely focused on doomsday scenarios. The threats are bidirectional.
On the one hand, computer viruses like Stuxnet could be used to hack into digitally controlled machinery in biology labs. DNA could even be used to deliver the attack by encoding malware that is unlocked when the DNA sequences are translated into digital files by a sequencing computer.
On the other hand, bad actors could use software and digital databases to design or reconstruct pathogens. If nefarious agents hacked into sequence databases or digitally designed novel DNA molecules with the intent to cause harm, the results could be catastrophic.
And not all cyberbiosecurity threats are premeditated or criminal. Unintentional errors that occur while translating between a physical DNA molecule and its digital reference are common. These errors might not compromise national security, but they could cause costly delays or product recalls.
Despite these risks, it is not unusual for researchers to order samples from a collaborator or a company and never bother to confirm that the physical sample they receive matches the digital sequence they were expecting.
Infrastructure changes and new technologies could help increase the security of life science workflows. For instance, voluntary screening guidelines are already in place to help DNA synthesis companies screen orders for known pathogens. Universities could institute similar mandatory guidelines for any outgoing DNA synthesis orders.
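As a rough sketch of what such order screening could look like, the toy check below flags any order that shares a long exact stretch of DNA with a blocklist sequence. The blocklist entry here is invented, and screening under the actual guidelines relies on curated databases and alignment tools rather than this simple exact-match test.

```python
# Toy sketch of order screening: flag a synthesis order if it shares any
# long exact substring (k-mer) with a blocklist of sequences of concern.
# Real screening relies on curated databases and alignment tools such as
# BLAST; the sequences below are invented placeholders.
K = 20  # window length for exact matching

def kmers(seq: str, k: int = K) -> set:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def order_is_flagged(order_seq: str, blocklist: list) -> bool:
    order_windows = kmers(order_seq)
    return any(order_windows & kmers(entry) for entry in blocklist)

blocklist = ["ATGACCGTTAGCCTGAAAGGTTACCGTATTGACCCTGAA"]  # placeholder only
order = "GGGATGACCGTTAGCCTGAAAGGTTACCGTATTGACCCTGAACCC"
print(order_is_flagged(order, blocklist))  # True: shares a 20-base window
```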
There is also currently no simple, affordable way to confirm DNA samples by whole genome sequencing. Simplified protocols and user-friendly software could be developed, so that screening by sequencing becomes routine.
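As a sketch of what that routine verification might involve, the snippet below compares a received sequence against the digital sequence that was ordered and reports their percent identity. Both sequences are invented, and a real pipeline would align raw sequencing reads rather than assume two tidy, equal-length strings.

```python
# Sketch of the verification step the authors argue should become routine:
# compare what sequencing says you received against the digital sequence
# you ordered. A real pipeline would align raw reads to the reference;
# this toy check just reports percent identity of two equal-length strings.
def percent_identity(expected: str, observed: str) -> float:
    if len(expected) != len(observed):
        raise ValueError("sketch assumes equal-length sequences")
    matches = sum(a == b for a, b in zip(expected, observed))
    return 100.0 * matches / len(expected)

expected = "ATGGCTGGTAAACTGTCTACTGTTGAATAA"  # sequence as designed (invented)
observed = "ATGGCTGGTAAACTGTCAACTGTTGAATAA"  # sequence as received, 1 mismatch
print(f"{percent_identity(expected, observed):.1f}% identity")  # 96.7% identity
```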
The ability to manipulate DNA was once the privilege of the select few and very limited in scope and application. Today, life scientists rely on a global supply chain and a network of computers that manipulate DNA in unprecedented ways. The time to start thinking about the security of the digital/DNA interface is now, not after a new Stuxnet-like cyberbiosecurity breach.
Jenna E. Gallegos, Postdoctoral Researcher in Chemical and Biological Engineering, Colorado State University and Jean Peccoud, Professor, Abell Chair in Synthetic Biology, Colorado State University

Love at First Sight

Love at First Sight? It's Probably Just Lust

We've all seen that movie moment when two strangers meet and feel an instant romantic connection — in fact, "love at first sight" has been a mainstay of literature for thousands of years, and people in real life often claim to experience a similar spark.
But is that feeling actually love? Not quite, according to the authors of a new study.
In the study, researchers investigated whether people feel love at first sight — LAFS — or whether they believe retroactively that they felt that way, once they've already formed an attachment to a romantic partner. The scientists also questioned whether what people call "love" at a first encounter is truly representative of the complex emotions that make up love — or just a powerful physical attraction.
Prior studies have shown that being in love activates certain brain regions, and the location of the activity can vary depending on what type of love the person is feeling, such as emotional, maternal or passionate love. Intense, passionate love activates the same networks in the brain as addiction does, while longer-term love sparks responses in brain regions associated with attachment and reward.
Researchers have also previously reported that as many as 1 in 3 people in Western countries claim to have experienced LAFS, and that the feeling is associated with more passion and stronger bonds within the relationship, the scientists wrote in the new study.
But there was little evidence indicating whether LAFS occurred when people thought it did — at the moment of their first meeting — or whether they merely remembered it happening that way through the lens of their current romantic feelings, the study authors explained.
The scientists collected data from about 500 encounters between nearly 400 participants, mostly heterosexual Dutch and German students in their mid-20s. Using three stages of data collection — an online survey, a laboratory study and three dating events lasting up to 90 minutes each — the researchers gathered information from their subjects about meeting prospective romantic partners. They noted whether participants said that they felt something akin to LAFS upon a first meeting, and how physically attractive they ranked the person who inspired those feelings. 
To define what qualified as "love," subjects rated themselves on several key components: "eros" (physical attraction), "intimacy," "passion" and "commitment." During the tests, 32 different individuals reported experiencing LAFS a total of 49 times — and those reports weren't typically accompanied by high ratings for love components such as intimacy and commitment.
However, reports of LAFS did correspond with a potential partner scoring higher as physically attractive, the researchers discovered. About 60 percent of the study participants were women, but men were more likely to report feeling LAFS "on the spot," the study authors reported. And in every case, their experience of LAFS was unreciprocated, suggesting that mutual, instantaneous LAFS "might generally be rare," according to the study.
The authors determined that LAFS was, in fact, merely "a strong initial attraction" that people identified as love, either at the moment they felt it, or in retrospect. And though some study subjects who were already involved with someone reported that they fell in love at first glance, it's hard to say for sure if that happened the way they remembered. Answering this question would require further investigation into romantic relationships, to see how those initial, powerful feelings of instantaneous love play out over time, the scientists wrote.   
The findings were published online Nov. 17 in the Journal of the International Association for Relationship Research.

Find out why some of your brain cells die after you learn something new

Did You Know?


When You Learn, Your Brain Swells with New Cells — Then It Kills Them

Every time you learn a skill, new cells burst to life in your brain. Then, one after another, those cells die off as your brain figures out which ones it really needs.
In a new opinion paper, published online Nov. 14 in the journal Trends in Cognitive Sciences, researchers proposed that this swelling and shrinking of the brain is a Darwinian process.
An initial burst of new cells helps the brain deal with new information, according to the paper. Then, the brain works out which of these new cells work best and which are unnecessary, killing off the extras in a survival-of-the-fittest contest. That cull leaves behind only the cells the brain needs to most efficiently maintain what it has learned, the paper said.
The initial swelling or burst of brain cells is "rather small, of course," said lead author Elisabeth Wenger, a researcher at the Center for Lifespan Psychology at the Max Planck Institute for Human Development in Berlin, Germany. "It would be quite impractical to have huge changes" inside the skull.
Researchers have long known that brains change in response to learning. A classic 2003 study, for example, observed major volume differences between the brains of professional and amateur musicians. But the new study is the first time researchers have watched that growth in action over a fairly long timescale, Wenger said, and offered a hypothesis as to how it works.
Wenger and her colleagues had 15 right-handed study subjects learn, over the course of seven weeks, to write with their left hands. The researchers subjected the enterprising learners to magnetic resonance imaging (MRI) brain scans over the study period. The gray matter in the subjects' motor cortices (regions of the brain involved in muscle movement) grew by an additional 2 to 3 percent before shrinking back to its original size, the researchers found.
"It's so hard to observe and detect these volumetric changes, because, as you can imagine, there are also many noise factors that come into play when we measure normal participants in the MRI scanner," Wenger told Live Science. ("Noise" refers to messy, fuzzy artifacts in data that make it difficult for researchers to make precise measurements.)
MRIs use complex physics to peer through the walls of the skull into the brain. But the machines aren't perfect and can introduce errors into fine measurements. And the human brain swells and shrinks for reasons other than learning, Wenger said. For example, your brain is a lot thicker and more turgid after a few glasses of water than it is if you're dehydrated, she said.
That's why it's taken so long for researchers to make good observations of this growth and shrinking over time (or, as the scientists call it, expansion and renormalization), Wenger said. It's also why they can't yet offer more detail as to exactly which cells are multiplying and dying off to cause all that change, she said.
Some mix of neurons and synapses — as well as various other cells that help the brain function — bursts into being as the brain learns. And then some of those cells disappear.
That's all the researchers know so far, though it's enough for them to develop their still-somewhat-rough model of expansion and renormalization. In order to deeply understand exactly how the process works, and what kind of cells are being selected for, the researchers need to study the process at a much finer level of detail, they said in the paper. They need to see which cells are appearing and which are disappearing.
In attempting to do that, however, researchers face the constant challenge of neuroscience: It's not exactly ethical to slice into the skulls of living people and poke around with microscopes and needles.
Wenger said the next steps will involve fine-tuning MRIs to help provide the finer level of detail the scientists need. The researchers will also do some poking around in the brains of animals, where expansion and renormalization is already somewhat better-understood, she added.

Cremated human remains


Cremated human remains were found inside this ceramic box. An inscription found nearby says that they were buried June 22, 1013 and belong to the Buddha. It is not certain if the statues were buried along with the remains.
Credit: Photo courtesy Chinese Cultural Relics
The cremated remains of what an inscription says is the Buddha, also called Siddhārtha Gautama, have been discovered in a box in Jingchuan County, China, along with more than 260 Buddhist statues.
The translated inscription on the box reads: "The monks Yunjiang and Zhiming of the Lotus School, who belonged to the Mañjuśrī Temple of the Longxing Monastery in Jingzhou Prefecture, gathered more than 2,000 pieces of śarīra [cremated remains of the Buddha], as well as the Buddha's teeth and bones, and buried them in the Mañjuśrī Hall of this temple," on June 22, 1013. At the site where the statues and Buddha remains were buried, archaeologists also found the remains of a structure that could be from the Mañjuśrī Hall.
Yunjiang and Zhiming spent more than 20 years gathering the remains of the Buddha, who is also sometimes referred to as Gautama Buddha, the inscription notes. "In order to promote Buddhism, they wanted to collect śarīra [Buddhist relics]. To reach this goal, both of them practiced the instruction of Buddhism during every moment of their lives for more than 20 years," the inscription says. "Sometimes they received the śarīra from others' donations; sometimes they found them by chance; sometimes they bought them from other places; and sometimes others gave them the śarīra to demonstrate their wholeheartedness."
The inscription does not mention the 260 Buddhist statues that were found buried near the remains of the Buddha. The archaeologists aren't sure whether or not the statues were buried at the same time as the cremated remains, wrote the team of archaeologists, who were led by Hong Wu, a research fellow at the Gansu Provincial Institute of Cultural Relics and Archaeology, in two articles published recently in the journal Chinese Cultural Relics.
The archaeologists did not speculate on whether any of the remains are actually from the Buddha, who died around 2,500 years ago. Previous archaeological discoveries in China have also revealed human remains with inscriptions claiming that they belong to the Buddha, the archaeologists noted. These include a skull bone, supposedly from the Buddha, found inside a gold chest in Nanjing.
The statues, which are up to 6.6 feet (2 meters) high, were created between the time of the northern Wei dynasty (A.D. 386 to 534) and the Song dynasty (A.D. 960 to 1279), the archaeologists wrote. During that time, Jingchuan County was a transportation hub on the eastern end of the Silk Road, archaeologists said. 
The statues include depictions of the Buddha, bodhisattvas (those who seek enlightenment), arhats (those who have found enlightenment) and deities, known as heavenly kings. Some of the statues only depict the head of the individual, while others are life-size, with some even showing individuals standing on platforms. A few of the statues are steles, which are stone slabs that have a carving within them. Steles are sometimes considered to be a form of statue.
Few of the statues have any writing on them. One holds the date corresponding to May 26, 571, with inscriptions that mention a "disciple Bi Sengqing," who may or may not have created the statue.
"[I] realized that I am confused (…) everyday, because of my admiration of the wisdom of the Buddha, [I] contribute my daily expenses as a tribute, to sculpt a statue of Śākyamuni Buddha, praying for greater longevity, and … reads the inscription, whose next line is not visible.
Villagers discovered the statues and Buddha remains while repairing roads in December 2012 at Gongchi Village in Jingchuan County. Over the following year, archaeologists excavated the remains, detailing their findings in Chinese in 2016 in the journal “Wenwu”. Both articles were recently translated into English and published in the journal Chinese Cultural Relics.


Change the Windows 7 Logon Screen



Change the Windows 7 Logon Screen
Hey! Welcome to Team Tech, your source for every kind of technology trick. Today I am going to discuss how to change the Windows 7 logon background. Windows 7 Logon Background Changer is a free, open-source program that lets you change the wallpaper of your Windows 7 logon screen (also known as the "welcome screen", "login screen" or Logon UI).

It works with Windows 7 Home Basic, Home Premium, Professional, Ultimate, Enterprise and Starter, in x86 or x64 (32- or 64-bit). It also works on Windows Server 2008 R2, though customizing a server's logon screen is not advisable.

On a side note, this small program is WPF-based, so it doubles as a nice technical demo of Windows Presentation Foundation's capabilities for those interested in WPF. It requires a decent GPU for the 3D animations to run smoothly.

What does it do?

- It creates a few JPEG files based on the image you want to use as the Windows 7 login-screen wallpaper, applies the appropriate cropping and sizing, and saves them at the best compression quality possible. It does NOT change any system file, and the program itself does not require admin rights to run: it will just ask you to run, as admin, a very simple cmd file that creates the required folder and registry key with the appropriate rights (a rough sketch of that step is shown below). Any user of the computer will then be able to change the Windows 7 logon screen wallpaper. You can also prevent users from modifying the logon screen wallpaper without administrator rights (an option available by clicking the "Settings" button).
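For readers curious what that cmd file roughly does, below is a minimal Python sketch of the same idea, based on the well-documented Windows 7 "OEMBackground" mechanism (a registry flag plus a special backgrounds folder). It is not the Logon Background Changer's own code, it must be run as an administrator, the image path is hypothetical, and it assumes your JPEG is already below the roughly 256 KB size limit Windows enforces.

```python
# Minimal sketch (run as administrator) of the folder + registry key the
# article's cmd file sets up, using the documented Windows 7 "OEMBackground"
# mechanism. This is not the Logon Background Changer's own code, and it
# assumes the JPEG is already below the ~256 KB limit Windows enforces.
import os
import shutil
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background"
BACKGROUNDS_DIR = os.path.expandvars(r"%windir%\System32\oobe\info\backgrounds")

def set_logon_wallpaper(jpeg_path: str) -> None:
    # 1. Tell Windows to use an OEM-supplied logon background.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "OEMBackground", 0, winreg.REG_DWORD, 1)
    # 2. Create the backgrounds folder and copy the image in under the
    #    filename Windows looks for.
    os.makedirs(BACKGROUNDS_DIR, exist_ok=True)
    shutil.copyfile(jpeg_path, os.path.join(BACKGROUNDS_DIR, "backgroundDefault.jpg"))

set_logon_wallpaper(r"C:\Users\Public\Pictures\my_wallpaper.jpg")  # hypothetical path
```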

Download it here: Win7 Logo Background.zip

What's your opinion on changing the Windows logon background? Feel free to share your thoughts with us in the comments section below.

If you find the information in this post useful, please share it with your friends on Facebook, Twitter and Google Plus.



