Saturday, December 27, 2014

Wow

Over 20,000 views on the website! That is great.  Probably should have monetized.  Enjoy the history, everyone!

Tuesday, December 23, 2014

Happy Break

I hope you are all doing well and dry (hope this rain lets up).  I hope you all have a fantastic break and are safe with any travels you may do.  So many of you were so kind and generous, especially with the sweets, so I might need to be rolled into class come the new year.  Thank you all so much; it was completely unnecessary but totally appreciated.  A few kiddos went home with thank-you cards, but anything from today will need to wait until after break; I couldn't keep up!

Anyways, please remember your children have history projects due the week after break.  5th grade has their projects due on 1/15 and the 6th due on 1/12.  Have a great break!

Monday, December 22, 2014

ON THE TECHNICAL EXPLANATION FOR SANTA CLAUS'S ABILITY TO DELIVER PRESENTS WORLDWIDE IN A SINGLE NIGHT


Many of you know that besides history, I also love science.  This is a fantastic look at the history and science of Santa.  Look here.

Here is a webquest about the origins of Santa!

Friday, December 12, 2014

5th Grade Test

5th Grade will be having their test on the Hebrews on Wednesday, 12/17.  Study guide went out today and can again be found here.


One of the best meteor showers of the year is coming Saturday and Sunday nights

This was found @ http://www.vox.com/2014/12/12/7382275/geminid-meteor-shower#
An uncommonly huge meteor photographed during the 2009 Geminid shower from Victorville, California. (Wally Pacholka / Barcroft Media / Getty Images)
This Saturday and Sunday night, head outside to see one of the year's best meteor showers: the Geminids.
Every year around this time, Earth crosses into a trail of debris thrown off by the asteroid 3200 Phaethon. As these tiny pieces of rock descend through our atmosphere, they burn up, producing shooting stars visible to the naked eye.
The shower will peak early Sunday morning, but experts say that both nights should feature a good number of meteors — somewhere around 50 to 60 per hour. Still, even though the Geminids are annually one of the most consistent and active meteor showers, that only works out to roughly one meteor per minute, so don't head outside expecting to see one immediately.
Instead, you should head outside, get comfortable, let your eyes adjust to the dark, and stare at the sky — so you can see the shooting streaks of light that result from Earth colliding with a stream of asteroid dust.

What causes the meteor shower

A photo of the 2013 shower taken in Germany. (Dirk Essl)
The majority of meteor showers are caused by Earth passing through debris left by comets. This one is different, because it's caused by dust from an asteroid, not a comet.
Comets and asteroids are both relatively small pieces of rock that orbit the sun, but there are some differences: comets are chunks of ice and rock, and have very elliptical orbits (so they go extremely far out to the edges of the solar system, then come back in), whereas asteroids are mostly made of rock and metal, and most have more circular orbits.
As a result, Earth seldom crosses the path of most asteroids, because they orbit the Sun farther out, between Mars and Jupiter. 3200 Phaethon, however, has an elliptical orbit that brings it closer in — it sometimes travels farther out than Mars, but also comes extremely close to the Sun (twice as close as Mercury), causing dust to stream from it and form a comet-like tail. This is the reason that scientists disagree over how to categorize 3200 Phaethon, with some calling it a hybrid "rock comet."
The orbit of 3200 Phaethon. (Sky and Telescope)
Regardless of its official designation, every year in mid-December, Earth passes through its orbit, encountering a trail of debris. It doesn't pose any danger to us, but it causes a shower of meteors — and because they appear to come from the constellation Gemini, the shower is called the Geminids.
However, though the shower was first observed in 1862, no one knew what caused it until 1983, when NASA's IRAS satellite spotted the asteroid and astronomers realized that its orbit made it a likely candidate. In 2009, another spacecraft observed it ejecting dust as it neared the sun, and most scientists now believe it is responsible for the meteor shower.

How to see the meteor shower

The shower will peak very early Sunday morning — regardless of where you live — but it's expected to produce a fair number of meteors both Saturday and Sunday nights, starting around 10 pm and lasting until dawn.
The darker it is, the more meteors you'll be able to see, so rural areas are much better than cities. That being said, even within a developed area, just getting away from bright streetlights will help you see the meteors much better.
Where to look for the meteors. (Sky and Telescope)
Though they'll all appear to come from the Gemini constellation, towards the East (for people in the Northern hemisphere), they'll streak across the entire sky as they burn up in Earth's atmosphere, so anywhere you can get a wide open view of the sky is best.
Even though this is a relatively active shower — in some years, the most active — it will still only feature a meteor every minute or so, and they can be easily missed. So to see them, you'll have to be patient. You won't need a telescope.
Right now, forecasts call for some cloud cover in the Southwest and Southeast, and many parts of the country are also experiencing frigid temperatures. But if clouds or cold are going to ruin the shower for you, there are live streams of the shower — from NASA and the Slooh Community Observatory — online.

Thursday, December 4, 2014

Great Job Tonight!

I am so proud of everyone that performed tonight.  FCCS has some wonderful talent!

Monday, December 1, 2014

Toga time... and work on those projects!

Thanks Ms. Roper for the awesome time.  Latin is awesome with Ms. Roper.

Don't forget, get started on those projects!

Tuesday, November 18, 2014

6th Grade Test and don't forget about projects!


6th Grade Test: Asian Empires Test is on Friday, 11/21.  Study guide is here.  Jeopardy below!



Projects: Both grades are being assigned a Term 2 project.  Below are the project sheets that students are taking home today.

5th Grade Project due January 15th.

6th Grade Project due January 12th.

Friday, November 14, 2014

Quizzes, Tests, and Projects Oh My!

5th Grade: Students are having their test on the Egypt and Nubia unit on Tuesday, 11/18.  Students already have study guides, but for your convenience here is the guide.  Jeopardy is below.



6th Grade: Students have a quiz on Friday 11/14.  This will be on the main ideas of the Mughal and Ottoman Empires.  I am hoping to test around the 25th.

Projects: Both grades are being assigned a Term 2 project.  Below are the project sheets that students are taking home today.

5th Grade Project due January 15th.

6th Grade Project due January 12th.

Tuesday, October 21, 2014

How to view Thursday's solar eclipse with a shoebox

I thought this was super cool, and it isn't every day there is a partial solar eclipse.

You'll also need some aluminum foil and a white sheet of paper to watch as the moon partially occludes the sun on Thursday.
By Miriam Kramer, SPACE.com Staff Writer, October 21, 2014

A potentially amazing partial solar eclipse is due to darken skies above North America Thursday (Oct. 23), and you can build an easy tool to help you view it safely.
It isn't safe to look directly at the sun during any eclipse of the sun, and the partial solar eclipse on Thursday is no exception. Even though the moon is passing in front of the star from Earth's perspective, the sun is still incredibly bright, and looking directly at it can damage any skywatcher's eyesight. Instead of looking straight at Earth's closest star, observers still interested in seeing the partial eclipse can use a pinhole camera — an easy tool made with household items.
A pinhole camera projects sunlight through a small hole in a box onto the other side of the box, allowing you to see a view of the sun safely without risking your eyesight. To create a pinhole camera, all you need is a shoebox, some white paper and foil. Once your creation is ready, you can see the progression of the partial eclipse with ease. [See Our Step-by-Step Guide to Creating a Pinhole Eclipse Viewer]
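If you're curious how big the projected sun will actually look inside the box, here is a rough back-of-the-envelope sketch (my own estimate, not from the article) based on the sun's apparent size of roughly half a degree in the sky:

```python
import math

# Rough estimate (an assumption for illustration, not from the article):
# the projected solar disk is about as wide as the pinhole-to-paper
# distance times the tangent of the sun's angular diameter (~0.53 degrees).
SUN_ANGULAR_DIAMETER_DEG = 0.53

def projected_sun_diameter_mm(pinhole_to_paper_mm: float) -> float:
    """Approximate diameter of the sun's image on the paper screen."""
    return pinhole_to_paper_mm * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

# A typical shoebox is roughly 300 mm (about a foot) long.
print(f"{projected_sun_diameter_mm(300):.1f} mm")  # about 2.8 mm
```

In other words, expect a small, sharp disk only a few millimeters wide at the far end of a shoebox; a longer box gives a bigger (but dimmer) image.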
You can also use some crafty skills and a pair of binoculars to create a solar eclipse projector for Thursday's event. To do that you'll need: a pair of binoculars; a tripod or stack of books; duct tape; scissors; and two pieces of cardboard. You can use our video guide to build a solar eclipse projector with binoculars.
Thursday's partial solar eclipse should be visible, weather permitting, to people across a wide swath of North America, and even if you can't catch the eclipse from your part of the world, you can see it live online. The online Slooh Community Observatory will host a live webcast with expert commentary on its website www.slooh.com starting at 5 p.m. EDT (2100 GMT) on Thursday. Another webcast will be hosted by the Griffith Observatory in Los Angeles, California.
Thursday's solar eclipse could act as a preview for another eclipse that should be visible to people around the United States in 2017, according to one eclipse expert.
"This partial eclipse visible to people looking through sun-safe filters on Thursday is a coming attraction for the August 21, 2017, eclipse that will have the moon entirely covering the sun in a 60-mile-wide band across the U.S. from Oregon to South Carolina, with 80 percent or more of the sun covered from most of the continental U.S.," Jay Pasachoff said in a statement.
WARNING: Never stare directly at the sun through binoculars, an unprotected telescope or your unaided eye. Serious eye damage can result. Astronomers use protective filters or solar eclipse glasses to safely observe the sun. 
"The sun is so bright that even through ordinary sunglasses you can damage your eyes if you stare at it," Pasachoff added. "The special solar filters that are available, which are made of a black polymer, block out all but about a thousandth of a percent of the sun's brightness, while ordinary sunglasses would dim the Sun by only a relatively small bit even in the visible while allowing almost all the hazardous infrared to come through."
Editor's Note: If you take an amazing skywatching photo of the solar eclipse or any other night sky view you'd like to share for a possible story or image gallery, please contact managing editor Tariq Malik at spacephotos@space.com.

Original article on Space.com.

Monday, October 20, 2014

Chicken Mummies


We've been having a great time with our Chicken Mummies project.  Thank you so much for all of the donations to make this happen.  We should be taking our pharaohs home in a few weeks.  We still will need salt, thanks!

Friday, October 17, 2014

Cool Interactive Mongol Map

Time to start looking at the exception (don't worry, the students will get that joke)
Interactive Mongol Map

Thursday, October 16, 2014

Egyptian Mythology Webquest


Students, please click here to access your webquest.


Wednesday, October 15, 2014

Google Maps, behind the scenes of the pyramids

Super cool website found by the Somani family, thanks for sharing.  I think you all will find it pretty neat.

Pyramids of Giza

Monday, October 13, 2014

6th Grade Test!

All the kids have their study guides and are well on their way to finishing them up.
Jeopardy


P.S. Don't forget your permission slips and activity fee!

Wednesday, October 8, 2014

“Dracula’s Dungeon” Unearthed in Turkey

“Dracula’s Dungeon” Unearthed in Turkey

By Christopher Klein
Turkish archaeologists say they have discovered the dungeons that once held the 15th-century Romanian ruler Vlad the Impaler, credited as being the real-life inspiration for Bram Stoker’s classic horror tale “Dracula.”
Credit: DeAgostini/Getty Images
In 1442, the ruler of Wallachia (now part of present-day Romania) embarked on a diplomatic mission into the heart of the Ottoman Empire. It was a leap of faith for Vlad II, who had pledged to defend Christianity in Eastern Europe against the Ottomans 11 years earlier when he joined the fellowship of knights known as the Order of the Dragon. Now, however, the man who had been given the surname Dracul (which means “dragon” in Romanian) by his fellow knights needed the help of the Ottoman Sultan Murad II to fight a rival from the neighboring territory of Transylvania, and he journeyed to make his plea in person along with his two princes—7-year-old Radu and 11-year-old Vlad III, also known by the patronymic name Dracula (“son of Dracul”).
Vlad II ultimately received the military support he sought from the Ottomans, but it came at a price. In addition to an annual tribute, the Wallachian ruler agreed to leave his two sons behind as political prisoners to ensure his loyalty. The boys were held hostage in a picturesque citadel high atop a rocky precipice lording over the town of Tokat, which had been conquered by the Seljuk Turks at the end of the 12th century and incorporated into the Ottoman Empire in 1392. During his five years of captivity inside the fortress, the bile festered inside young Vlad III and his hatred of the Ottomans surged. After his release and eventual succession to the Wallachian throne, the older prince’s venom against the Ottoman Empire would be unleashed in such a brutal fashion that centuries later he is known simply as Vlad the Impaler and the real-life inspiration for a classic horror tale.
Now, according to Turkish newspaper Hurriyet Daily News, archaeologists working on the restoration of Tokat Castle in northern Turkey have discovered two dungeons where the Ottomans held Vlad the Impaler hostage. The dungeons inside the ancient fortress were “built like a prison,” archaeologist Ibrahim Cetin told the Turkish newspaper. “It is hard to estimate in which room Dracula was kept,” Cetin admitted, “but he was around here.”
In addition to the two dungeons that held Dracula, archaeologists have also unearthed a military shelter and a secret tunnel believed to have been used to access a nearby Roman bath. “The castle is completely surrounded by secret tunnels,” Cetin said. “It is very mysterious.”
What isn’t as mysterious is what happened to the Transylvania-born Vlad III after his release from Tokat Castle around the time his father and older brother Mircea were brutally killed in 1447. He ascended to the throne in 1456 and maintained his barbaric rule through torture, mutilation and mass murder. Victims were disemboweled, beheaded and skinned or boiled alive.
By 1462 he was at war with the Ottomans. With the enemy on the advance with a force three times the size of his own, Vlad III hid in the Romanian forests and relied on savage guerilla tactics. His forces poisoned wells, burned crops and paid diseased men to infiltrate Ottoman ranks and pass along their pestilence. It was a gruesome mass killing, however, that led to his posthumous nickname when he ordered 20,000 defeated Ottomans to be impaled on wooden stakes outside the city of Targoviste. When a horrified Sultan Mehmed II came upon the forest of the dead being picked apart by crows, he retreated to Constantinople.
Hungarian forces captured Vlad the Impaler later that year, and he was imprisoned for the second time in his life. Most historians believe his later captivity occurred in Romania and lasted more than a decade, although the exact location and length have been disputed. Vlad the Impaler reclaimed the Wallachian throne after the death of his younger brother Radu in 1475, but it was a short-lived reign as he was believed to have been killed in battle against the Ottomans in 1476.
The legend of Vlad the Impaler’s brutality grew after his death as stories spread that he dined on the impaled bodies of his victims and even dipped his bread into their blood. The dark tales apparently served as inspiration for Irish novelist Bram Stoker who in 1897 penned a Gothic novel about a vampire who shared a Transylvanian birthplace and nickname with Vlad the Impaler—Dracula.

Monday, October 6, 2014

Walters Art Gallery Fieldtrip

Hello everyone, permission slips went out today for the Walters Art Gallery field trip.  Please check your child's take-home folder today.

Unfortunately, the bulk email functionality of our grading software is still not working.  They tell me they are working on it!

Don't forget, 5th grade test on Wednesday!

Monday, September 29, 2014

Upcoming Quizzes and Tests

5th Grade: Quiz Wednesday, 10/01/14 - Mesopotamia, Sumer, Cuneiform, and Hammurabi.
                  Test Wednesday, 10/07/14 - Study Guide - Jeopardy below

6th Grade: "Pop" Quiz Tuesday 9/30/14 - Geography of Arabian Peninsula and Beginnings of Islam

Interesting Video debunking some myths about fighting in armor

I thought some of you may enjoy this video of two individuals showing what fighting in late medieval armor really would be like.  It really dispels the myth about lack of mobility.  Pardon the French... no seriously, the video is in French.


Thursday, September 11, 2014

Upcoming Quiz and Test

5th Grade: Paleolithic and Neolithic quiz on Tuesday 9/16.

6th Grade: Byzantine Empire and the Rise of Christianity Test on Thursday 9/18.  The study guide is found here.  The jeopardy is below.

Wednesday, September 3, 2014

5th Grade Quiz and upcoming Project

Hello everyone, this announcement is for the 5th grade. They will have a short quiz on Friday covering geography: the 5 themes of geography, geographic features, and timelines. They will receive a study guide on Thursday. It can be found here. Typically quizzes will not have a study guide, but this is the first one, so I want them to see my expectations.

5th Grade will also be assigned a short-term project, due Friday 9/12/14. Students will be creating a map from school to home. They will receive this handout.

Thursday, August 28, 2014

Online Textbook now available!

Hello everyone, this morning I was able to take the time (more than it should have taken!) to set up the online textbooks for 5th and 6th grade.  To the left, you should see a link to our Online Textbook.  If you click there and enter first.lastname14 as the username and fccs2014 as the password, your child will have access to the Online Textbook.  An example of this would be collin.kenny14 as the username and fccs2014 as my password.  Students with hyphenated last names, such as if my name were Collin Kenny-Kenny, would use collin.kennykenny14.  Have a great day!
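For anyone who likes seeing the pattern spelled out, here is a tiny illustrative sketch of the naming rule described above (just an illustration of the pattern; the accounts themselves are already set up, so there is nothing you need to run):

```python
def textbook_username(first_name: str, last_name: str) -> str:
    """Build a username following the pattern above: firstname.lastname14,
    all lowercase, with hyphens dropped from hyphenated last names."""
    cleaned_last = last_name.lower().replace("-", "")
    return f"{first_name.lower()}.{cleaned_last}14"

print(textbook_username("Collin", "Kenny"))        # collin.kenny14
print(textbook_username("Collin", "Kenny-Kenny"))  # collin.kennykenny14
```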

Monday, August 25, 2014

Great First Day!

Wow, what a wonderful day.  It was a joy meeting (or seeing again) your students.  I can't wait for this year.  Please fill out this informational form so I can have the best ways of contacting you, thanks!

Thursday, August 21, 2014

Is Google Making Us Stupid?

The original article can be found here.

What the Internet is doing to our brains

Illustration by Guy Billout
"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “ brain. “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
I think I know what's going on. For more than a decade now, I've been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I've got the telltale fact or pithy quote I was after. Even when I'm not working, I'm as likely as not to be foraging in the Web's info-thickets: reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they're sometimes likened, hyperlinks don't merely point to related works; they propel you toward them.)
For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.
I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?”
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace  anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”
Anecdotes alone don't prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited "a form of skimming activity," hopping from one source to another and rarely returning to any source they'd already visited. They typically read no more than one or two pages of an article or book before they would "bounce" out to another site. Sometimes they'd save a long article, but there's no evidence that they ever went back and actually read it. The authors of the study report:
It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”
"You are right," Nietzsche replied, "our writing equipment takes part in the forming of our thoughts." Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche's prose "changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style."
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”
As we use what the sociologist Daniel Bell has called our "intellectual technologies"—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock "disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences." The "abstract framework of divided time" became "the point of reference for both action and thought."
The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum  observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that's what we're seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It's becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.
When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.
The Net's influence doesn't end at the edges of a computer screen, either. As people's minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience's new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the "shortcuts" would give harried readers a quick "taste" of the day's news, sparing them the "less efficient" method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.
Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor  carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman  eloquently described what’s at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”
As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Nicholas Carr’s most recent book, The Big Switch: Rewiring the World, From Edison to Google, was published earlier this year.