This is a video I found on YouTube, a highlight reel of the goals Paul Scholes has scored....
Man I hope we meet Liverpool in the Finals so that we can beat them.
Because of boredom. Because the left brain is directing the right hand to pound the computer keyboard to no purpose. Because the static of my thoughts drowns out the words on their way out. Because of suspecting that everything is fictional. Because of not knowing whether "a place so barren even the birds won't shit there" amounts to constipation of the mind. Because of fearing death. Because thoughts are leaking out of my head. Because the universe is expanding. Because of wanting proof of existence. Because of wanting to pick up pretty girls. Because of knowing that one plus one equals two. Because of boredom.
In the instant before daybreak, the world lies buried in the darkness of the abyss.
Religion a figment of human imagination
00:01 28 April 2008
NewScientist.com news service
Andy Coghlan
Humans alone practice religion because they're the only creatures to have evolved imagination.
That's the argument of anthropologist Maurice Bloch of the London School of Economics. Bloch challenges the popular notion that religion evolved and spread because it promoted social bonding, as has been argued by some anthropologists.
Instead, he argues that first, we had to evolve the necessary brain architecture to imagine things and beings that don't physically exist, and the possibility that people somehow live on after they've died.
Once we'd done that, we had access to a form of social interaction unavailable to any other creatures on the planet. Uniquely, humans could use what Bloch calls the "transcendental social" to unify with groups, such as nations and clans, or even with imaginary groups such as the dead. The transcendental social also allows humans to follow the idealised codes of conduct associated with religion.
"What the transcendental social requires is the ability to live very largely in the imagination," Bloch writes.
"One can be a member of a transcendental group, or a nation, even though one never comes in contact with the other members of it," says Bloch. Moreover, the composition of such groups, "whether they are clans or nations, may equally include the living and the dead."
Modern-day religions still embrace this idea of communities bound with the living and the dead, such as the Christian notion of followers being "one body with Christ", or the Islamic "Ummah" uniting Muslims.
Stuck in the here and now
No animals, not even our nearest relatives the chimpanzees, can do this, argues Bloch. Instead, he says, they're restricted to the mundane and Machiavellian social interactions of everyday life, of sparring every day with contemporaries for status and resources.
And the reason is that they can't imagine beyond this immediate social circle, or backwards and forwards in time, in the same way that humans can.
Bloch believes our ancestors developed the necessary neural architecture to imagine before or around 40-50,000 years ago, at a time called the Upper Palaeolithic Revolution, the final sub-division of the Stone Age.
At around the same time, tools that had been monotonously primitive since the earliest examples appeared 100,000 years earlier suddenly exploded in sophistication, art began appearing on cave walls, and burials began to include artefacts, suggesting belief in an afterlife, and by implication the "transcendental social".
Once humans had crossed this divide, there was no going back.
"The transcendental network can, with no problem, include the dead, ancestors and gods, as well as living role holders and members of essentialised groups," writes Bloch. "Ancestors and gods are compatible with living elders or members of nations because all are equally mysterious invisible, in other words transcendental."
Nothing special
But Bloch argues that religion is only one manifestation of this unique ability to form bonds with non-existent or distant people or value-systems.
"Religious-like phenomena in general are an inseparable part of a key adaptation unique to modern humans, and this is the capacity to imagine other worlds, an adaptation that I argue is the very foundation of the sociality of modern human society."
"Once we realise this omnipresence of the imaginary in the everyday, nothing special is left to explain concerning religion," he says.
Chris Frith of University College London, a co-organiser of a "Sapient Mind" meeting in Cambridge last September, thinks Bloch is right, but that "theory of mind" – the ability to recognise that other people or creatures exist, and think for themselves – might be as important as evolution of imagination.
"As soon as you have theory of mind, you have the possibility of deceiving others, or being deceived," he says. This, in turn, generates a sense of fairness and unfairness, which could lead to moral codes and the possibility of an unseen "enforcer" - God – who can see and punish all wrong-doers.
"Once you have these additions of the imagination, maybe theories of God are inevitable," he says.
Journal reference: Philosophical Transactions of the Royal Society B (DOI: 10.1098/rstb.2008.0007)
The reasonable man adapts himself to the world;
the unreasonable one persists in trying to adapt the world to himself.
Therefore, all progress depends on the unreasonable man.
A new study by researchers at UC Davis shows how our very short-term "working memory," which allows the brain to stitch together sensory information, operates. The system retains a limited number of high-resolution images for a few seconds, rather than a wider range of fuzzier impressions.
Rep. Monique Davis (D-Chicago) interrupted atheist activist Rob Sherman during his testimony Wednesday afternoon before the House State Government Administration Committee in Springfield and told him, "What you have to spew and spread is extremely dangerous . . . it's dangerous for our children to even know that your philosophy exists!
"This is the Land of Lincoln where people believe in God," Davis said. "Get out of that seat . . . You have no right to be here! We believe in something. You believe in destroying! You believe in destroying what this state was built upon."
Apparently it's still open season on some views of God.
Blind to Change, Even as It Stares Us in the Face
By NATALIE ANGIER
Leave it to a vision researcher to make you feel like Mr. Magoo.
When Jeremy Wolfe of Harvard Medical School, speaking last week at a symposium devoted to the crossover theme of Art and Neuroscience, wanted to illustrate how the brain sees the world and how often it fumbles the job, he naturally turned to a great work of art. He flashed a slide of Ellsworth Kelly’s “Study for Colors for a Large Wall” on the screen, and the audience couldn’t help but perk to attention. The checkerboard painting of 64 black, white and colored squares was so whimsically subtle, so poised and propulsive. We drank it in greedily, we scanned every part of it, we loved it, we owned it, and, whoops, time for a test.
Dr. Wolfe flashed another slide of the image, this time with one of the squares highlighted. Was the highlighted square the same color as the original, he asked the audience, or had he altered it? Um, different. No, wait, the same, definitely the same. That square could not now be nor ever have been anything but swimming-pool blue ... could it? The slides flashed by. How about this mustard square here, or that denim one there, or this pink, or that black? We in the audience were at sea and flailed for a strategy. By the end of the series only one thing was clear: We had gazed on Ellsworth Kelly’s masterpiece, but we hadn’t really seen it at all.
The phenomenon that Dr. Wolfe’s Pop Art quiz exemplified is known as change blindness: the frequent inability of our visual system to detect alterations to something staring us straight in the face. The changes needn’t be as modest as a switching of paint chips. At the same meeting, held at the Italian Academy for Advanced Studies in America at Columbia University, the audience failed to notice entire stories disappearing from buildings, or the fact that one poor chicken in a field of dancing cartoon hens had suddenly exploded. In an interview, Dr. Wolfe also recalled a series of experiments in which pedestrians giving directions to a Cornell researcher posing as a lost tourist didn’t notice when, midway through the exchange, the sham tourist was replaced by another person altogether.
Beyond its entertainment value, symposium participants made clear, change blindness is a salient piece in the larger puzzle of visual attentiveness. What is the difference between seeing a scene casually and automatically, as in, you’re at the window and you glance outside at the same old streetscape and nothing registers, versus the focused seeing you’d do if you glanced outside and noticed a sign in the window of your favorite restaurant, and oh no, it’s going out of business because, let’s face it, you always have that Typhoid Mary effect on things. In both cases the same sensory information, the same photonic stream from the external world, is falling on the retinal tissue of your eyes, but the information is processed very differently from one eyeful to the next. What is that difference? At what stage in the complex circuitry of sight do attentiveness and awareness arise, and what happens to other objects in the visual field once a particular object has been designated worthy of a further despairing stare?
Visual attentiveness is born of limited resources. “The basic problem is that far more information lands on your eyes than you can possibly analyze and still end up with a reasonable sized brain,” Dr. Wolfe said. Hence, the brain has evolved mechanisms for combating data overload, allowing large rivers of data to pass along optical and cortical corridors almost entirely unassimilated, and peeling off selected data for a close, careful view. In deciding what to focus on, the brain essentially shines a spotlight from place to place, a rapid, sweeping search that takes in maybe 30 or 40 objects per second, the survey accompanied by a multitude of body movements of which we are barely aware: the darting of the eyes, the constant tiny twists of the torso and neck. We scan and sweep and perfunctorily police, until something sticks out and brings our bouncing cones to a halt.
The mechanisms that succeed in seizing our sightline fall into two basic classes: bottom up and top down. Bottom-up attentiveness originates with the stimulus, with something in our visual field that is the optical equivalent of a shout: a wildly waving hand, a bright red object against a green field. Bottom-up stimuli seem to head straight for the brainstem and are almost impossible to ignore, said Nancy Kanwisher, a vision researcher at M.I.T., and thus they are popular in Internet ads.
Top-down attentiveness, by comparison, is a volitional act, the decision by the viewer that an item, even in the absence of flapping parts or strobe lights, is nonetheless a sight to behold. When you are looking for a specific object — say, your black suitcase on a moving baggage carousel occupied largely by black suitcases — you apply a top-down approach, the bouncing searchlights configured to specific parameters, like a smallish, scuffed black suitcase with one broken wheel. Volitional attentiveness is much trickier to study than is a simple response to a stimulus, yet scientists have made progress through improved brain-scanning technology and the ability to measure the firing patterns of specific neurons or the synchronized firing of clusters of brain cells.
Recent studies with both macaques and humans indicate that attentiveness crackles through the brain along vast, multifocal, transcortical loops, leaping to life in regions at the back of the brain, in the primary visual cortex that engages with the world, proceeding forward into frontal lobes where higher cognitive analysis occurs, and then doubling back to the primary visual centers. En route, the initial signal is amplified, italicized and annotated, and so persuasively that the boosted signal seems to emanate from the object itself. The enhancer effect explains why, if you’ve ever looked at a crowd photo and had somebody point out the face of, say, a young Franklin Roosevelt or George Clooney in the throng, the celebrity’s image will leap out at you thereafter as though lighted from behind.
Whether lured into attentiveness by a bottom-up or top-down mechanism, scientists said, the results of change blindness studies and other experiments strongly suggest that the visual system can focus on only one or very few objects at a time, and that anything lying outside a given moment’s cone of interest gets short shrift. The brain, it seems, is a master at filling gaps and making do, of compiling a cohesive portrait of reality based on a flickering view.
“Our spotlight of attention is grabbing objects at such a fast rate that introspectively it feels like you’re recognizing many things at once,” Dr. Wolfe said. “But the reality is that you are only accurately representing the state of one or a few objects at any given moment.” As for the rest of our visual experience, he said, it has been aptly called “a grand illusion.” Sit back, relax and enjoy the movie called You.