
♥ Saturday, December 14 2013 - Zac Brown Band Tickets in Erie, Pennsylvania For Sale

Price: $1
Seller:
Type: Tickets & Traveling, For Sale - Private.

Zac Brown Band TICKETS
Erie Insurance Arena
Erie, PA
Saturday, December 14, 2013
View Zac Brown Band Tickets at Erie Insurance Arena
Call the online ticket window toll-free: (855) 730-xxxx
The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic general-purpose computer. It combined, for the first time, the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract xxxx times a second, a thousand times faster than any other machine, and it also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from xxxx to full operation at the end of xxxx. The machine was huge, weighing 30 tons and using 200 kilowatts of electric power, and it contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[61] One of the major engineering feats was to minimize tube burnout, which was a common problem at that time. The machine was in almost constant use for the next ten years.

The Manchester Small-Scale Experimental Machine (SSEM), by contrast, was not intended to be a practical computer but was instead designed as a testbed for the Williams tube, an early form of computer memory. Although considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[63] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[64]

The first commercial computer was the Ferranti Mark 1, which was delivered to the University of Manchester in February xxxx. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random-access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves).[65] A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between xxxx and xxxx, one of them to Shell labs in Amsterdam.[66]

In June xxxx, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than $1 million each ($8.99 million as of xxxx).[69] UNIVAC was the first "mass produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words). A key feature of the UNIVAC system was a newly invented type of metal magnetic tape, and a high-speed tape unit, for non-volatile storage. Magnetic tape is still used in many computers.[70]

In xxxx, IBM publicly announced the IBM 701 Electronic Data Processing Machine, the first in its successful 700/xxxx series and its first IBM mainframe computer. The IBM 704, introduced in xxxx, used magnetic core memory, which became the standard for large machines.
The first implemented high-level general-purpose programming language, Fortran, was also being developed at IBM for the 704 during xxxx and xxxx and released in early xxxx. (Konrad Zuse's xxxx design of the high-level language Plankalkül was not implemented at that time.) A volunteer user group, which exists to this day, was founded in xxxx to share their software and experiences with the IBM 701.

IBM introduced a smaller, more affordable computer in xxxx that proved very popular.[71] The IBM 650 weighed over 900 kg, the attached power supply weighed around xxxx kg, and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. It cost $500,000[72] ($4.35 million as of xxxx) or could be leased for $3,500 a month ($30 thousand as of xxxx).[69] Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture (the instruction format included the address of the next instruction) and software: the Symbolic Optimal Assembly Program, SOAP,[73] assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were, when needed, located in the next row of the drum to be read, and additional wait time for drum rotation was not required (the placement idea is sketched below).
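The rotational-latency argument behind this kind of optimal placement can be illustrated with a toy model. The sketch below is not the IBM 650's actual instruction set or SOAP's algorithm; it simply assumes a hypothetical 50-word drum track that advances one word per word-time, and compares naive consecutive placement with placement that puts each next instruction exactly where the read head will be when the previous one finishes.

    # Toy model of rotational latency on drum memory (illustrative; not the real IBM 650 or SOAP).
    TRACK = 50        # assumed words per drum track
    EXEC_TIME = 3     # assumed word-times needed to execute one instruction

    def wait_for(finish_pos, next_addr):
        """Word-times the processor idles until the drum rotates around to next_addr."""
        return (next_addr - finish_pos) % TRACK

    def total_wait(addresses):
        """Total rotational wait for a chain of instructions at the given drum addresses."""
        waited, pos = 0, 0
        for addr in addresses:
            waited += wait_for(pos, addr)
            pos = (addr + EXEC_TIME) % TRACK   # drum position when execution finishes
        return waited

    n = 20
    naive = list(range(n))                                  # consecutive addresses
    optimal = [(i * EXEC_TIME) % TRACK for i in range(n)]   # each word placed where the head will be

    print("naive layout waits", total_wait(naive), "word-times")
    print("optimal layout waits", total_wait(optimal), "word-times")

In the naive layout, by the time one instruction finishes the drum has already rotated past the next consecutive word, so nearly a full revolution is wasted per instruction; placing each instruction where the read head will arrive, which SOAP approximated through static analysis, removes that wait.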
Even before the ENIAC was finished, Eckert and Mauchly recognized its limitations and started the design of a stored-program computer, EDVAC. John von Neumann was credited with a widely circulated report describing the EDVAC design, in which both the programs and working data were stored in a single, unified store. This basic design, denoted the von Neumann architecture, would serve as the foundation for the worldwide development of ENIAC's successors.[79] In this generation of equipment, temporary or working storage was provided by acoustic delay lines, which used the propagation time of sound through a medium such as liquid mercury (or through a wire) to briefly store data. A series of acoustic pulses is sent along a tube; after a time, as a pulse reaches the end of the tube, the circuitry detects whether it represents a 1 or a 0 and causes the oscillator to re-send it. Others used Williams tubes, which use the ability of a small cathode-ray tube (CRT) to store and retrieve data as charged areas on the phosphor screen. By xxxx, magnetic core memory[80] was rapidly displacing most other forms of temporary storage, and dominated the field through the mid-xxxxs.

EDVAC was the first stored-program computer designed; however, it was not the first to run. Eckert and Mauchly left the project and its construction floundered. The first working von Neumann machine was the Manchester "Baby" or Small-Scale Experimental Machine, developed by Frederic C. Williams and Tom Kilburn at the University of Manchester in xxxx as a test bed for the Williams tube;[81] it was followed in xxxx by the Manchester Mark 1 computer, a complete system, using Williams tube and magnetic drum memory, and introducing index registers.[82] The other contender for the title "first digital stored-program computer" had been EDSAC, designed and constructed at the University of Cambridge. Operational less than one year after the Manchester "Baby", it was also capable of tackling real problems.

EDSAC was actually inspired by plans for EDVAC (Electronic Discrete Variable Automatic Computer), the successor to ENIAC; these plans were already in place by the time ENIAC was successfully operational. Unlike ENIAC, which used parallel processing, EDVAC used a single processing unit. This design was simpler and was the first to be implemented in each succeeding wave of miniaturization, with increased reliability. Some view the Manchester Mark 1, EDSAC, and EDVAC as the "Eves" from which nearly all current computers derive their architecture. Manchester University's machine became the prototype for the Ferranti Mark 1. The first Ferranti Mark 1 machine was delivered to the University in February xxxx, and at least nine others were sold between xxxx and xxxx.

The bipolar transistor was invented in xxxx. From xxxx onwards transistors replaced vacuum tubes in computer designs,[84] giving rise to the "second generation" of computers. Initially the only devices available were germanium point-contact transistors, which, although less reliable than the vacuum tubes they replaced, had the advantage of consuming far less power.[85] The first transistorised computer was built at the University of Manchester and was operational by xxxx;[86] a second version was completed there in April xxxx. The later machine used 200 transistors and 1,300 solid-state diodes and had a power consumption of 150 watts. However, it still required valves to generate the clock waveforms at 125 kHz and to read and write on the magnetic drum memory, whereas the Harwell CADET operated without any valves by using a lower clock frequency of 58 kHz when it became operational in February xxxx.[87] Problems with the reliability of early batches of point-contact and alloyed-junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available.[88]

Transistorized electronics improved not only the CPU (central processing unit) but also the peripheral devices. The second-generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk pack could be easily exchanged with another pack in a few seconds. Even though the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.

Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles: one for the instruction, one for the operand data fetch.
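The PDP-1 figure above follows directly from the memory cycle time. A quick back-of-the-envelope check, using only the numbers quoted in the text:

    # Instruction rate implied by a 5-microsecond core memory cycle,
    # when a typical instruction needs two cycles (one fetch, one operand access).
    cycle_time_us = 5
    cycles_per_instruction = 2

    instruction_time_us = cycle_time_us * cycles_per_instruction   # 10 microseconds
    ops_per_second = 1_000_000 / instruction_time_us               # 100,000 operations per second

    print(instruction_time_us, "microseconds per instruction")
    print(int(ops_per_second), "operations per second")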
The explosion in the use of computers began with "third-generation" computers, making use of Jack St. Clair Kilby's[92] and Robert Noyce's[93] independent invention of the integrated circuit (or microchip), which led to the invention of the microprocessor. While the question of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel xxxx,[94] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[95]

During the xxxxs there was considerable overlap between second- and third-generation technologies.[96] IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in xxxx. As late as xxxx, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the Bxxxx were stack machines, which allowed for simpler programming. These pushdown automatons were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business, and universities.[97] It became possible to simulate analog circuits with the Simulation Program with Integrated Circuit Emphasis, or SPICE (xxxx), on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of the microcomputer: small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the xxxxs, became ubiquitous in the xxxxs and beyond.

Systems as complicated as computers require very high reliability. ENIAC remained on, in continuous operation from xxxx to xxxx, for eight years before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable: they were installed in pairs, one off-line, and tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform.[98] Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on the fly, during a service event.[99][100]

In the 21st century, multi-core CPUs became commercially available.[101] Content-addressable memory (CAM)[102] has become inexpensive enough to be used in networking, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the xxxxs, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate only draws significant current during the transition between logic states, except for leakage.
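A standard first-order model captures why this matters: the dynamic power of a CMOS gate scales with how often it switches, roughly P ≈ a × C × V² × f (activity factor a, switched capacitance C, supply voltage V, clock frequency f), plus a static leakage term. The numbers below are made-up illustrative values, not measurements of any particular device:

    # First-order CMOS power estimate: dynamic switching power plus static leakage.
    # All parameter values here are illustrative assumptions, not data for a real chip.
    def cmos_power(activity, capacitance_f, voltage_v, frequency_hz, leakage_a):
        dynamic = activity * capacitance_f * voltage_v**2 * frequency_hz  # spent only on transitions
        static = leakage_a * voltage_v                                    # drawn continuously
        return dynamic + static

    # A gate that switches on 10% of cycles vs. one that switches on every cycle.
    low_activity = cmos_power(activity=0.1, capacitance_f=1e-15, voltage_v=1.0, frequency_hz=1e9, leakage_a=1e-9)
    high_activity = cmos_power(activity=1.0, capacitance_f=1e-15, voltage_v=1.0, frequency_hz=1e9, leakage_a=1e-9)

    print(f"low-activity gate: {low_activity*1e6:.3f} microwatts")
    print(f"high-activity gate: {high_activity*1e6:.3f} microwatts")

Because the dynamic term dominates and is paid only when a gate actually switches, circuits that spend most of their time idle draw correspondingly little power.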
This reduction in power consumption has allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. The thermal design power which is dissipated during operation has become as essential as computing speed of operation. In xxxx servers consumed 1.5% of the total energy budget of the U.S.[103] The energy consumption of computer data centers was expected to double to 3% of world consumption by xxxx. The SoC (system on a chip) has compressed even more of the integrated circuitry into a single chip. Computing hardware and its software have even become a metaphor for the operation of the universe.[104] Although DNA-based computing and quantum computing are years or decades in the future, the infrastructure is being laid today, for example, with DNA origami on photolithography[105] and with quantum antennae for transferring information between ion traps.[106] By xxxx, researchers had entangled 14 qubits.[107] Fast digital circuits (including those based on Josephson junctions and rapid single flux quantum technology) are becoming more nearly realizable with the discovery of nanoscale superconductors.[108]

The history of the modern computer begins with two separate technologies, automated calculation and programmability. However, no single device can be identified as the earliest computer, partly because of the inconsistent application of that term.[4] A few precursors are worth mentioning. Some mechanical aids to computing were very successful and survived for centuries until the advent of the electronic calculator: the Sumerian abacus, designed around xxxx BC,[5] a descendant of which won a speed competition against a contemporary desk calculating machine in Japan in xxxx,[6] and the slide rule, invented in the xxxxs, which was carried on five Apollo space missions, including to the moon.[7] Arguably the astrolabe and the Antikythera mechanism, an ancient astronomical analog computer built by the Greeks around 80 BC, also belong on this list.[8] The Greek mathematician Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.[9] This is the essence of programmability.

Blaise Pascal invented the mechanical calculator in xxxx,[12] known as Pascal's calculator. It was the first machine to better human performance of arithmetical computations[13] and would turn out to be the only functional mechanical calculator in the 17th century.[14] Two hundred years later, in xxxx, Thomas de Colmar released, after thirty years of development, his simplified arithmometer; it became the first machine to be commercialized because it was strong enough and reliable enough to be used daily in an office environment. The mechanical calculator was at the root of the development of computers in two separate ways.
Initially, it was in trying to develop more powerful and more flexible calculators[15] that the computer was first theorized by Charles Babbage[16][17] and then developed.[18] Secondly, development of a low-cost electronic calculator, successor to the mechanical calculator, resulted in the development by Intel[19] of the first commercially available microprocessor integrated circuit.

In xxxx, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In xxxx, Charles Babbage, "the actual father of the computer",[20] was the first to conceptualize and design a fully programmable mechanical calculator,[21] his analytical engine.[22] Babbage started in xxxx. Initially he was to program his analytical engine with drums similar to the ones used in Vaucanson's automata, which by design were limited in size, but soon he replaced them by Jacquard's card readers, one for data and one for the program. "The introduction of punched cards into the new engine was important not only as a more convenient form of control than the drums, or because programs could now be of unlimited extent, and could be stored and repeated without the danger of introducing errors in setting the machine by hand; it was important also because it served to crystallize Babbage's feeling that he had invented something really new, something much more than a sophisticated calculating machine."[23]

After this breakthrough, he redesigned his difference engine (No. 2, still not programmable), incorporating his new ideas. Allan Bromley came to the Science Museum in London starting in xxxx to study Babbage's engines and determined that difference engine No. 2 was the only engine that had a complete enough set of drawings to be built, and he convinced the museum to do it. This engine, finished in xxxx, proved without doubt the validity of Charles Babbage's work.[25] Except for a pause between xxxx and xxxx, Babbage would spend the rest of his life simplifying each part of his engine: "Gradually he developed plans for Engines of great logical power and elegant simplicity (although the term 'simple' is used here in a purely relative sense)."[26]

Between xxxx and xxxx, Ada Lovelace, an analyst of Charles Babbage's analytical engine, translated an article by Italian military engineer Luigi Menabrea on the engine, which she supplemented with an elaborate set of notes of her own. These notes contained what is considered the first computer program: that is, an algorithm encoded for processing by a machine. She also stated: "We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves." Furthermore, she developed a vision of the capability of computers to go beyond mere calculating or number-crunching,[28] claiming that should "...the fundamental relations of pitched sounds in the science of harmony and of musical composition..." be susceptible "...of adaptations to the action of the operating notation and mechanism of the engine...", it "...might compose elaborate and scientific pieces of music of any degree of complexity or extent".[29]
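As noted later in this document, the program in Lovelace's notes was a procedure for computing Bernoulli numbers. Purely as a modern illustration of that calculation, the sketch below uses a standard recurrence in today's notation; it is not a reconstruction of Lovelace's actual table of operations for the engine.

    # Bernoulli numbers via the standard recurrence
    #   sum_{k=0..m} C(m+1, k) * B_k = 0  for m >= 1, with B_0 = 1.
    # Illustrative only; not Lovelace's Note G program.
    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
        B = [Fraction(0)] * (n + 1)
        B[0] = Fraction(1)
        for m in range(1, n + 1):
            acc = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
            B[m] = -acc / (m + 1)
        return B

    print(bernoulli(8))   # B_1 = -1/2, B_2 = 1/6, and the later odd-indexed values are 0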
In the late xxxxs, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..."[30] To process these punched cards he invented the tabulator and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the xxxx United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of ideas and technologies that would later prove useful in the realization of practical computers had begun to appear: Boolean algebra, the vacuum tube (thermionic valve), punched cards and tape, and the teleprinter.

Howard Aiken wanted to build a giant calculator and was looking for a sponsor to build it. He first presented his design to the Monroe Calculator Company and then to Harvard University, both without success. Carmello Lanza, a technician in Harvard's physics laboratory who had heard Aiken's presentation, "...couldn't see why in the world I (Howard Aiken) wanted to do anything like this in the Physics laboratory, because we already had such a machine and nobody used it... Lanza led him up into the attic... There, sure enough... were the wheels that Aiken later put on display in the lobby of the Computer Laboratory. With them was a letter from Henry Prevost Babbage describing these wheels as part of his father's proposed calculating engine. This was the first time Aiken ever heard of Babbage, he said, and it was this experience that led him to look up Babbage in the library and to come across his autobiography",[32] which gave a description of his analytical engine.[24]

The Atanasoff–Berry Computer (ABC) was the world's first electronic digital computer, albeit not programmable.[41] Atanasoff is considered to be one of the fathers of the computer.[42] Conceived in xxxx by Iowa State College physics professor John Atanasoff, and built with the assistance of graduate student Clifford Berry,[43] the machine was not programmable, being designed only to solve systems of linear equations. The computer did employ parallel computation. A xxxx court ruling in a patent dispute found that the patent for the xxxx ENIAC computer derived from the Atanasoff–Berry Computer.
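To give a sense of the kind of problem the ABC was built for, the sketch below solves a small system of linear equations by Gaussian elimination. This is a generic textbook method written for illustration; the text says nothing about the ABC's actual internal procedure, so treat the code as a hypothetical stand-in for "solving systems of linear equations" rather than a description of the machine.

    # Solve A x = b for a small linear system by Gaussian elimination with partial pivoting.
    # Illustrative of the problem class the ABC addressed, not of its actual mechanism.
    def solve(A, b):
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[pivot] = M[pivot], M[col]
            for r in range(col + 1, n):
                factor = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= factor * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    # Example: 2x + y = 5 and x + 3y = 10.
    print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))    # expect [1.0, 3.0]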
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored-program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in xxxx. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of which was completed in xxxx at the University of Manchester in England, the Manchester Small-Scale Experimental Machine (SSEM or "Baby"). The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described by von Neumann's paper, EDVAC, was completed but did not see full-time use for an additional two years.

Computers using vacuum tubes as their electronic elements were in use throughout the xxxxs, but by the xxxxs they had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorized computer was demonstrated at the University of Manchester in xxxx.[52] In the xxxxs, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel xxxx, further decreased size and cost and further increased speed and reliability of computers. By the late xxxxs, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement for mechanical controls in domestic appliances such as washing machines. The xxxxs witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.[citation needed]

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, and so on. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
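A toy stored-program machine makes the opcode and jump ideas above concrete. The instruction set, opcode numbers, and memory layout below are invented for illustration and do not correspond to any historical machine; the point is only that instructions and data sit in the same memory as plain numbers, and that a conditional jump changes which number is fetched next.

    # A tiny stored-program interpreter: program and data share one memory of numbers.
    # Invented toy instruction set; not any real machine's opcodes.
    LOAD, ADD, STORE, JUMP_IF_ZERO, HALT = 1, 2, 3, 4, 5

    def run(memory):
        acc = 0          # accumulator register
        pc = 0           # program counter: address of the next instruction
        while True:
            opcode, operand = memory[pc], memory[pc + 1]
            pc += 2
            if opcode == LOAD:            # acc <- memory[operand]
                acc = memory[operand]
            elif opcode == ADD:           # acc <- acc + memory[operand]
                acc += memory[operand]
            elif opcode == STORE:         # memory[operand] <- acc
                memory[operand] = acc
            elif opcode == JUMP_IF_ZERO:  # conditional branch: change where fetching continues
                if acc == 0:
                    pc = operand
            elif opcode == HALT:
                return memory

    # Program: add the values in cells 20 and 21 and store the sum in cell 22.
    mem = [0] * 30
    mem[0:10] = [LOAD, 20, ADD, 21, STORE, 22, HALT, 0, 0, 0]
    mem[20], mem[21] = 7, 35
    print(run(mem)[22])   # 42

Because the program itself is just the numbers in cells 0 through 6, it could be read, copied, or modified by the very same kind of instructions, which is exactly the stored-program property described above.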
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[58] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices, and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered xxxx" or to "add the number that is in cell xxxx to the number that is in cell xxxx and put the answer into cell xxxx." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from -128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four, or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
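The byte ranges quoted above can be checked directly, and two's complement itself is easy to demonstrate: a negative value -x in an 8-bit cell is stored as the same bit pattern as the unsigned value 256 - x. A small sketch, using nothing beyond standard arithmetic:

    # 8-bit ranges and two's complement representation.
    BITS = 8
    print(2 ** BITS)                                        # 256 distinct values per byte
    print(0, "to", 2 ** BITS - 1)                           # unsigned range: 0 to 255
    print(-(2 ** (BITS - 1)), "to", 2 ** (BITS - 1) - 1)    # signed range: -128 to +127

    def to_twos_complement(value, bits=BITS):
        """Bit pattern (as an unsigned int) used to store `value` in two's complement."""
        return value & ((1 << bits) - 1)

    print(format(to_twos_complement(5), "08b"))     # 00000101
    print(format(to_twos_complement(-5), "08b"))    # 11111011, i.e. 256 - 5 = 251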
Computer main memory comes in two principal varieties: random-access memory, or RAM, and read-only memory, or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[63]

One means by which a computer can appear to run several programs at once is a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time," then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing", since each program is allocated a "slice" of time in turn.[67]
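Time-slicing can be sketched without real hardware interrupts by simply rotating through a set of tasks, giving each one a fixed quantum before switching. The cooperative round-robin below is a simplification invented for illustration (real operating systems preempt tasks via timer interrupts rather than asking them to yield), but it shows why several programs appear to run at once:

    # Cooperative round-robin "time-sharing" sketch: each task gets a small slice in turn.
    # Simplified illustration; real systems switch tasks preemptively on timer interrupts.
    from collections import deque

    def count_to(name, limit):
        """A toy task: counts up, handing back control after every step."""
        for i in range(1, limit + 1):
            print(f"{name}: {i}")
            yield                      # the point where the 'scheduler' takes back control

    def scheduler(tasks, slice_steps=2):
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                for _ in range(slice_steps):   # run the task for one time slice
                    next(task)
                ready.append(task)             # slice used up: go to the back of the queue
            except StopIteration:
                pass                           # task finished; drop it

    scheduler([count_to("A", 3), count_to("B", 3)])
    # Output interleaves A and B, so both appear to make progress "at the same time".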
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the xxxxs the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Indeed, when John Napier discovered logarithms for computational purposes in the early 17th century, there followed a period of considerable progress by inventors and scientists in making calculating tools. The apex of this early era of formal computing can be seen in the difference engine and its successor, the analytical engine (which was never completely constructed but was designed in detail), both by Charles Babbage. The analytical engine combined concepts from his work and that of others to create a device that, if constructed as designed, would have possessed many properties of a modern electronic computer. These properties include such features as an internal "scratch memory" equivalent to RAM, multiple forms of output including a bell, a graph-plotter, and a simple printer, and a programmable input-output "hard" memory of punch cards which it could modify as well as read. The key advancement which Babbage's devices possessed beyond those created before his was that each component of the device was independent of the rest of the machine, much like the components of a modern electronic computer. This was a fundamental shift in thought; previous computational devices served only a single purpose but had to be, at best, disassembled and reconfigured to solve a new problem. Babbage's devices could be reprogrammed to solve new problems by the entry of new data, and act upon previous calculations within the same series of instructions. Ada Lovelace took this concept one step further, by creating a program for the analytical engine to calculate Bernoulli numbers, a complex calculation requiring a recursive algorithm. This is considered to be the first example of a true computer program, a series of instructions that act upon data not known in full until the program is run.

During the Middle Ages, several European philosophers made attempts to produce analog computer devices. Influenced by the Arabs and Scholasticism, Majorcan philosopher Ramon Llull (xxxx–xxxx) devoted a great part of his life to defining and designing several logical machines that, by combining simple and undeniable philosophical truths, could produce all possible knowledge. These machines were never actually built, as they were more of a thought experiment to produce new knowledge in systematic ways; although they could make simple logical operations, they still needed a human being for the interpretation of results. Moreover, they lacked a versatile architecture, each machine serving only very concrete purposes. In spite of this, Llull's work had a strong influence on Gottfried Leibniz (early 18th century), who developed his ideas further and built several calculating tools using them.

By the High Middle Ages, the positional Hindu-Arabic numeral system had reached Europe, which allowed for systematic computation of numbers. During this period, the representation of a calculation on paper actually allowed calculation of mathematical expressions, and the tabulation of mathematical functions such as the square root and the common logarithm (for use in multiplication and division) and the trigonometric functions. By the time of Isaac Newton's research, paper or vellum was an important computing resource, and even in our present time, researchers like Enrico Fermi would cover random scraps of paper with calculation, to satisfy their curiosity about an equation.[citation needed] Even into the period of programmable calculators, Richard Feynman would unhesitatingly compute any steps which overflowed the memory of the calculators, by hand, just to learn the answer.[citation needed]

During the second half of the xxxxs, the joys of 'surfing the net' began to excite the interest of people beyond the professional computer-using communities [...] However, the existing computer networks were largely in government, higher education and business. They were not a free good and were not open to hobbyists or private firms that did not have access to a host computer. To fill this gap, a number of firms such as CompuServe, Prodigy, GEnie, and America Online sprang up to provide low cost network access [...]
While these networks gave access to the Internet for e-mail (typically on a pay-per-message basis), they did not give the ordinary citizen access to the full range of the Internet, or to the glories of gopherspace or the World Wide Web. In a country whose Constitution enshrines freedom of information, most of its citizens were effectively locked out of the library of the future. The Internet was no longer a technical issue, but a political one.

John Palfrey and Urs Gasser's Born Digital is a book that deals with the emergence of a generation of Digital Natives. According to the authors, Digital Natives are the generation born after xxxx. They have grown up with a strong internet presence and have never known life without a web presence. The book is primarily targeted at individuals who are parents and teachers of Digital Natives. It provides a broad survey of relevant issues generated by the advent of the web and digital technologies. The authors don't spend too much time on one topic; instead they cover a lot of ground, providing insight into many issues that Digital Natives face today. The purpose of the book, and indeed its strength, revolves around creating awareness rather than focused argument.

The first four chapters, "Identity," "Dossiers," "Privacy," and "Safety," deal with the relationship between digitized data and individual privacy. Chapter 4 deals with the mounting concern of abundant violent and sexual imagery. Digital Natives are constantly reinventing and expanding the offline social sphere by creating profiles on social networking websites such as MySpace and Facebook. They tend to take greater risks by providing personal information on these sites as well as on other websites. What happens to personal information over time? Information may be secure, but for how long? According to Palfrey and Gasser, the security of information is a mounting concern that can't be answered yet. In "Privacy," Palfrey and Gasser raise important questions concerning privacy. Every day, Digital Natives cede more and more information to various websites without any notion of what may be done with the information at a later date. What are the ramifications of so much data being in the hands of other people?

There is nothing more important than the safety of our children. There is also nothing more important than the education, creativity and innovation that has been, and can still further be, unleashed and harnessed with suitably crafted policies, and incentives, focused on the issues surrounding their use of digital media and other digital technologies, whether such policies and incentives come from parents, teachers, librarians, governments, lawmakers, or social media or other Internet-focused companies. These are some of the key subjects covered in Born Digital. But to begin to grapple with these issues, as the authors inform us, we must first understand Digital Natives.

The term "Digital Natives" is used, generally, to refer to people born after xxxx. The book Born Digital is about the issues surrounding Digital Natives and their intensive use of digital media and other digital technologies. Digital Natives were born into a world that was already pervasively digital.
Assuming they were born into an advanced industrial economy, and are not otherwise at the low end of the participation or technological gap, Digital Natives did not transition from an analog world to a digital world as most of us have.

Born Digital is especially focused on the issues surrounding Digital Natives' intensive use of the Internet and online social networks (like Facebook and MySpace) and other digital tools and media they use on a daily basis (such as instant messaging, texting, online chat rooms, video games, YouTube, etc.). We are no longer living in an analog world. The world, especially as experienced from the viewpoint of children and young adults who have access to these technologies, is now digital, and, more importantly, has been for them since they were born.

He thought for a minute. "I was looking at the email of Pentagon generals when I was 17. That does something to you when you're a young man. That's a defining experience. I think you can see what sort of a person you'll become based on your first experience with power as a young man. That was mine, and once you've had power like that it's hard to give it up."

"The intellectual property negotiation in the Trans-Pacific Partnership discussions has not been completed and a final text has not been agreed to," said Guthrie. "We are working with Congress, stakeholders, and our TPP negotiating partners to reach an outcome that promotes high-paying jobs in innovative American industries and reflects our values, including by seeking strong and balanced copyright protections, as well as advancing access to medicines while incentivizing the development of new, life-saving drugs."

Twist and shout! The Russians did a big favour for the freedom-loving peoples of the world, including those in the US who can still think with our own brains. The self-righteous pundits who complain about Russia's own human rights record, as if this were even remotely relevant, might try to recall how Snowden ended up there in the first place. He was passing through Moscow on his way to South America, and it was only by virtue of Washington's "gross violations of his human rights," as Amnesty International called it, that he got stuck there.

Indeed, the whole chase scene is symbolic of the difficulties in which Washington finds itself immersed. Unable to win their case in the court of public opinion, the self-styled leaders of the free world resort to threats and bullying to get their way, which kind of sums up American foreign policy in the second decade of the 21st century. And the spectacle of US attorney general Eric Holder trying to offer Russia assurances that his government would not torture or execute Snowden speaks volumes about how far the US government's reputation on human rights, even within the United States, has plummeted over the past decade.

Two weeks ago there was a surprisingly close call in the US House of Representatives, with the majority of House Democrats and 94 of 234 Republicans defying their House (and Senate) leadership, the White House, and the national security establishment in a vote to end the NSA's mass collection of phone records. The amendment was narrowly defeated by a vote of 205 to 217, but it was clear that "this is only the beginning," as John Conyers (D-MI), ranking Democrat on the Judiciary Committee, announced. Then Glenn Greenwald broke the story of the NSA's XKeyscore programme, the "widest reaching" of its secret surveillance systems, based on Snowden's revelations.
Greenwald has become a one-man army, swatting down attackers from the national security/journalistic establishment like a hero from a video game. Here you can see him wipe the floor with CNN's Jeffrey Toobin, or David Gregory of Meet the Press; or the most devastating takedown ever of a Washington Post journalist, Walter Pincus, who had to run a massive correction after promoting a false, far-fetched conspiracy theory about Greenwald and Wikileaks. If Snowden really leaked information that harmed US national security, why haven't any of these "really very smart" people been fired? Are we to believe that punishing this whistleblower is important enough to damage relations with other countries and put at risk all kinds of foreign policy goals, but the breach of security isn't enough for anyone important to be fired? Or is this another indication, like the generals telling Obama what his options were in Afghanistan, of the increasing power of the military/national security apparatus over our elected officials?

A (Benkler): It is - journalism is made up of many things. WikiLeaks doesn't do interviews and pound the pavement. Again, when we say WikiLeaks, we're really talking about before the severe degradation that followed the attack on the organisation that we described just before. WikiLeaks was a solution to a very particular and critical component of the way in which investigative journalism and muck-raking confront instances of corruption. It's - we don't only live from Pentagon papers or Watergate or the NSA wiretapping scandals of xxxx and the more recent months. But it's a clear, distinct component of what in the history of journalism we see as high points, where journalists are able to come in and say, here's a system operating in a way that is obscure to the public, and now we're able to shine the light. That's what WikiLeaks showed how to do for the network public sphere. WikiLeaks may fail in the future because of all these events, but the model of some form of decentralised leaking, that is secure technologically and allows for collaboration among different media in different countries, that's going to survive and somebody else will build it. But WikiLeaks played that critical role of that particular critical component of what muck-raking and investigative journalism has always done.

While it would be a stretch to say that September 11, xxxx was the genesis date for groups such as WikiLeaks and Anonymous, it would nevertheless be fair to suggest that the range of domestic (US) and geo-political events that followed those attacks 12 years ago had a profound effect upon global activism: from the invasions of Afghanistan and Iraq, the occupations of those two countries, Abu Ghraib, Guantanamo, the Bush presidency, the London and Madrid bombings, the global War on Terror, and the Patriot Act, to PRISM. In all of these cases, from the attacks themselves to the passage of restrictive censorship and privacy legislation, an understanding of "workings" and "process" was (and remains) fundamental.

The territory that now constitutes England, a country within the United Kingdom, was inhabited by ancient humans 800,000 years ago, as the discovery of flint tools at Happisburgh in Norfolk has revealed.[1] The earliest evidence for early modern humans in North West Europe is a jawbone discovered in Devon at Kents Cavern in xxxx, which was re-dated in xxxx to between 41,000 and 44,000 years old.[2] Continuous human habitation dates to around 12,000 years ago, at the end of the last glacial period.
The region has numerous remains from the Mesolithic, Neolithic, and Bronze Age, such as Stonehenge and Avebury. In the Iron Age, England, like all of Britain south of the Firth of Forth, was inhabited by the Celtic people known as the Britons, but also by some Belgae tribes (e.g. the Atrebates, the Catuvellauni, the Trinovantes, etc.) in the south east. In AD 43 the Roman conquest of Britain began; the Romans maintained control of their province of Britannia through to the 5th century.

The end of Roman rule in Britain enabled the Anglo-Saxon settlement of Britain, which is often regarded as the origin of England and the English people. The Anglo-Saxons, a collection of various Germanic peoples, established several kingdoms that became the primary powers in what is now England and parts of southern Scotland.[3] They introduced the Old English language, which displaced the previous British language. The Anglo-Saxons warred with British successor states in Wales, Cornwall, and the Hen Ogledd (Old North; the Brythonic-speaking parts of northern England and southern Scotland), as well as with each other. Raids by the Vikings were frequent after about AD 800, and the Norsemen took control of large parts of what is now England. During this period several rulers attempted to unite the various Anglo-Saxon kingdoms, an effort that led to the emergence of the Kingdom of England by the 10th century.

In xxxx, the Normans invaded and conquered England. The Norman dynasty established by William the Conqueror ruled England for over half a century before the period of succession crisis known as the Anarchy. Following the Anarchy, England came to be ruled by the House of Plantagenet, a dynasty which also had claims to the Kingdom of France; a succession crisis in France led to the Hundred Years' War, a series of conflicts involving the peoples and leaders of both nations. Following the Hundred Years' War, England became embroiled in its own succession wars; the Wars of the Roses pitted two branches of the House of Plantagenet against one another, the House of York and the House of Lancaster. Henry Tudor ended the Wars of the Roses and established the Tudor dynasty.

Under the Tudors and the later Stuart dynasty, England became a world colonial power. During the rule of the Stuarts, England fought the English Civil War, which resulted in the execution of King Charles I and the establishment of a series of republican governments: first a parliamentary republic known as the Commonwealth of England, then a military dictatorship under Oliver Cromwell known as the Protectorate. The Stuarts were restored to the throne in xxxx, though continued questions over religion resulted in the deposition of another Stuart king, James II, in the Glorious Revolution. England, which had conquered Wales in the 12th century, was united with Scotland in the early 18th century to form a new sovereign state called Great Britain.[4][5][6] Following the Industrial Revolution, Great Britain ruled a worldwide empire, the largest in recorded history. Following a process of decolonisation in the 20th century, the vast majority of the empire became independent; however, its cultural impact remains widespread and deep in many countries of the present day.

The time from Britain's first inhabitation until the last glacial maximum is known as the Old Stone Age, or Palaeolithic.
Archaeological evidence indicates that what was to become England was colonised by humans long before the rest of the British Isles because of its more hospitable climate between and during the various glacial periods of the distant past. This earliest evidence, from Boxgrove in Sussex, points to dates of 800,000 BP. These earliest inhabitants were hunter-gatherers, who made their living from hunting game and gathering edible plants. Low sea levels meant that Britain was still attached to the continent for much of this earliest period of history, and varying temperatures over tens of thousands of years meant that it was not always inhabited at all.[7]

The last Ice Age ended around 10,000 BCE, and England has been inhabited ever since. This marks the beginning of the Middle Stone Age, or Mesolithic. Rising sea levels cut Britain off from the continent for the last time around xxxx BCE. The population by this period was exclusively of our own species of the genus Homo, Homo sapiens sapiens, and the evidence suggests that their societies were increasingly complex and that they were manipulating their environment and their prey in new ways, possibly by selectively burning the then omnipresent woodland to create clearings where herds would gather, making them easier to hunt. Simple projectile weapons, such as the javelin and possibly the sling, would have been the main tools of the hunt. The bow and arrow was also known in Western Europe from at least xxxx BCE. The climate continued to improve, and it is likely the population was on the rise.[8]

The New Stone Age, or Neolithic, begins with the introduction of farming, ultimately from the Middle East, around xxxx BCE. It is not known whether this was caused by a substantial folk movement or native adoption of foreign practices, nor are these two models mutually exclusive. People began to cultivate crops and rear animals, and overall to lead a more settled lifestyle. Monumental collective tombs were built to house the dead in the form of chambered cairns and long barrows, and towards the end of the period other kinds of monumental stone alignments began to appear, such as Stonehenge, their cosmic alignments betraying a preoccupation with the sky and planets. Flint technology also developed, producing a number of highly artistic pieces as well as purely pragmatic ones. More extensive woodland clearance took place to make way for fields and pastures. The Sweet Track in the Somerset Levels is one of the oldest timber trackways discovered in Northern Europe and among the oldest roads in the world, dated by dendrochronology to the winter of xxxx–xxxx BCE; it too is thought to have been a primarily religious structure.[7]

The Bronze Age begins around xxxx BC with the first appearance of bronze objects in the archaeological record. This coincides with the appearance of the characteristic Beaker culture; again it is unknown whether this was brought about primarily by folk movement or by cultural assimilation, and again it may be a mixture of both. The Bronze Age sees a shift of emphasis from the communal to the individual, and the rise to prominence of increasingly powerful elites, whose power was enshrined in their control of the flow of precious resources, used to manipulate tin and copper into high-status bronze objects such as swords and axes, and in their prowess as hunters and warriors. Settlement became increasingly permanent and intensive.
Towards the end of the period, numerous examples of extremely fine metalwork begin to be found deposited in rivers, presumably for ritual reasons and perhaps reflecting a progressive shift of emphasis away from the sky and back to the earth, as a rising population increasingly put the land under greater pressure. In this period England also largely becomes bound up with the Atlantic trade system, which created something of a cultural continuum over a large part of Western Europe.[9] It is possible that the Celtic languages developed or spread to England as part of this system; by the end of the Iron Age, at the very least, there is ample evidence that they were spoken across the whole of England, as well as the western parts of Britain.[10]

The Iron Age is conventionally said to begin around 800 BC. The Atlantic system had by this time effectively collapsed, although England maintained contacts across the Channel with France, as the Hallstatt culture became widespread across the country. The overall picture of continuity suggests this was not accompanied by any substantial movement of population; crucially, only a single Hallstatt burial is known from Britain, and even here the evidence is inconclusive. On the whole, burials largely disappear across England, the dead being disposed of in a way which is archaeologically invisible: excarnation is a widely cited possibility. Hillforts were known since the Late Bronze Age, but a huge number were constructed in the period 600–400 BC, particularly in the South; after about 400 BC, however, new ones largely cease to be built and a large number cease to be regularly inhabited, while a smaller number of others become more and more intensively occupied, suggesting a degree of regional centralisation. It is around this time that the earliest mentions of Britain begin to appear in the annals of history. The first historical mention of the region is from the Massaliote Periplus, a sailing manual for merchants thought to date to the 6th century BC, and Pytheas of Massilia wrote of his exploratory voyage to the island around 325 BC. Both of these texts are now lost; although quoted by later writers, not enough survives to inform the archaeological interpretation to any significant degree.

Contact with the continent was generally at a lower point than in the Bronze Age, although it was not insignificant. Continental goods continued to make their way into England throughout the period, although with a possible hiatus from around 350–150 BC. Numerous armed invasions of hordes of migrating Celts are no longer considered to be realistic, although there are two known invasions. Around 300 BC, it appears that a group from the Gaulish Parisii tribe took over East Yorkshire, establishing the highly distinctive Arras culture; and from around 150–100 BC, groups of Belgae began to control significant parts of the South. These invasions would have constituted movements of a relatively small number of people who established themselves as a warrior elite at the top of pre-existing native systems, rather than any kind of total wipeout. The Belgic invasion was on a much larger scale than the Parisian settlement; however, the continuity of pottery style demonstrates clearly that the native population basically remained in place under new rulers. All the same, it was accompanied by significant socio-economic change.
Proto-urban, or even urban, settlements, known as oppida, begin to eclipse the old hillforts, and an elite whose position is based on battle-prowess and the ability to manipulate resources re-appears much more distinctly.[11]

In 55 and 54 BC, Julius Caesar, as part of his campaigns in Gaul, invaded Britain and claimed to have scored a number of victories, but he never penetrated further than Hertfordshire and was unable to establish a province. However, his invasions do mark a turning-point in British history. Control of trade, and of the flow of resources and prestige goods, became ever more important to the elites of southern Britain; as the provider of relatively limitless wealth and patronage, Rome steadily became the biggest player in all their dealings. In such a system, it is clear in retrospect that a full-scale invasion and ultimate annexation was inevitable.[12]

Tacitus in his Agricola wrote that the various groupings of Britons shared physical characteristics with their continental neighbours: the Britons of England were more typically blonde-haired, like the Gauls, in contrast to the Britons of Wales, who were generally dark and curly of hair, like the Spanish, or those of Scotland, stereotypically redheaded.[13] This is a gross oversimplification which nonetheless holds fairly true to the present day. Indeed, numerous archaeologists and geneticists now dismiss the long-held assumption that the invading Anglo-Saxons wiped out the native Britons in England, pointing instead to the possibility of a more limited folk movement bringing a new language and culture to which the natives gradually assimilated.[9]

Debate is ongoing, however, surrounding the ultimate origins of the people of the British Isles. In xxxx and xxxx respectively, Bryan Sykes and Stephen Oppenheimer both championed the idea of continuity ever since the Mesolithic period, with a substantial input from the East during the Neolithic.[14][15] More recently this view has been contested on the grounds that the haplotypes which Sykes and Oppenheimer associated with Spain hailed ultimately from Asia Minor. This might be more consistent with some kind of Neolithic wipeout; however, it is impossible to date this gene flow.[16] Other theories have proposed an even more substantial input in the Early Bronze Age than was previously thought. Ultimately, the genetics have not yet told us anything new; all these theories were well established amongst archaeologists long before attempts were made to identify historical population movements with genetics. There is still no consensus; what does seem to be agreed, however, is that the bulk of England's contemporary native population was already in place by the beginning of written history in this part of the world.

After Caesar's expeditions, the Romans began their real attempt to conquer Britain in 43 AD, at the behest of the Emperor Claudius. They landed in Kent and defeated two armies led by the kings of the Catuvellauni tribe, Caratacus and Togodumnus, in battles at the Medway and the Thames. Togodumnus was killed, and Caratacus fled to Wales. The Roman force, led by Aulus Plautius, then halted as Plautius sent for Claudius to come and finish the campaign. When Claudius arrived he led the final march on the Catuvellauni capital at Camulodunum, before returning to Rome for his triumph.
The Catuvellauni at this time held sway over most of the southeastern corner of England; eleven local rulers surrendered, a number of client kingdoms were established, and the rest became a Roman province with Camulodunum as its capital. Over the next four years the territory was consolidated, and the future emperor Vespasian led a campaign into the Southwest, where he subjugated two more tribes. By 54 AD the border had been pushed back to the Severn and the Trent, and campaigns were underway to subjugate Northern England and Wales.

In 60 AD, however, under the leadership of the warrior-queen Boudicca, the tribes rose in revolt against the Romans. Camulodunum was burned to the ground, along with Londinium and Verulamium; there is some archaeological evidence that the same happened at Winchester, and the Second Legion Augusta, stationed at Exeter, refused to move for fear of revolt among the locals there as well. The governor, Suetonius Paulinus, nonetheless marched back from his campaign in Wales to face Boudicca in battle. There was a substantial engagement somewhere along the line of Watling Street, at the end of which Boudicca was utterly defeated. The province was pacified once more.

Over the next twenty years the borders expanded only a little, but the governorship of Agricola saw the last pockets of independence in Wales and Northern England finally incorporated into the province. He also led a campaign into Scotland, but he was recalled from these conquests by the Emperor Domitian, and the border gradually solidified along the line of the Stanegate in Northern England. Hadrian's Wall was built along this line from 122 AD; apart from a number of temporary forays into Scotland, this was now the border. The Romans, and their culture, were here to stay; over the course of their three hundred and fifty years in charge, England's landscape would become thoroughly marked with traces of their presence.

In the wake of the breakdown of Roman rule in Britain from the middle of the fourth century, present-day England was progressively settled by Germanic groups. Collectively known as the "Anglo-Saxons", these were Angles and Saxons from what is now the Danish/German border area and Jutes from the Jutland peninsula. Early groupings in the region included the Hwicce, while settlers in the south were known as the Gewisse. The Battle of Deorham in 577 was a critical battle that established Anglo-Saxon rule in the area.[18][19] Saxon mercenaries had been present in Britain since before the late Roman period, but the main influx of population is thought to have taken place after the fifth century. The precise nature of these invasions has not been fully determined, with doubts being cast on the legitimacy of historical accounts due to a lack of archaeological finds. Gildas Sapiens' De Excidio et Conquestu Britanniae, composed in the 6th century, states that when the Roman army departed the Isle of Britannia in the 4th century CE, the indigenous Britons were invaded by the Picts, their neighbours to the north (now Scotland), and the Scots (now Ireland). The Britons then invited the Saxons into the island, hoping to repel the invading armies of the north. To their dismay, the Saxons themselves turned against the Britons after vanquishing the Scots and Picts.

Seven kingdoms are traditionally identified as being established by these Saxon migrants. Three were clustered in the South East: Sussex, Kent and Essex. The Midlands were dominated by the kingdoms of Mercia and East Anglia.
The lineage of Mercia's monarchs was traced back as far as the early 500s. To the north was Northumbria, which unified two earlier kingdoms, Bernicia and Deira. The development of these kingdoms led to domination by Northumbria in the 7th century, Mercia in the 8th century and then Wessex in the 9th century. Northumbria extended its control north into Scotland and west into Wales. It also subdued Mercia, whose first powerful king, Penda, was killed by Oswy in 655. Northumbria's power began to wane after 685 with the defeat and death of its king Aegfrith at the hands of the Picts. Mercian power reached its peak under the rule of Offa, who from 785 had influence over most of Anglo-Saxon England. From Offa's death in 796 the supremacy of Wessex was established under Egbert, who extended his control west into Cornwall before defeating the Mercians at the Battle of Ellendun in 825. Four years later he received submission and tribute from the Northumbrian king, Eanred.[20]

The sequence of events of the fifth and sixth centuries is particularly difficult to assess, peppered as it is with a mixture of mythology, such as the characters of Hengist and Horsa; legend, such as St Germanus's so-called "Alleluia Victory" against the heathens; and half-remembered history, such as the exploits of Ambrosius Aurelianus and King Arthur. However, the belief that the Saxons simply wiped out or drove out all the native Britons from England has been widely discredited by a number of archaeologists since the xxxxs, and that model is now severely questioned. At any rate, the Anglo-Saxons, including Saxonified Britons, progressively spread into England by a combination of military conquest and cultural assimilation, until by the eighth century some kind of England really had emerged.[21][22]

Throughout the 7th and 8th centuries power fluctuated between the larger kingdoms. Bede records Aethelbert of Kent as being dominant at the close of the 6th century, but power seems to have shifted northwards to the kingdom of Northumbria, which was formed from the amalgamation of Bernicia and Deira. Edwin of Northumbria probably held dominance over much of Britain, though Bede's Northumbrian bias should be kept in mind. Succession crises meant Northumbrian hegemony was not constant, and Mercia remained a very powerful kingdom, especially under Penda. Two defeats essentially ended Northumbrian dominance: the Battle of the Trent in 679 against Mercia, and Nechtanesmere in 685 against the Picts.

The first recorded Viking attack in Britain was in 793 at Lindisfarne monastery, as given by the Anglo-Saxon Chronicle. However, by then the Vikings were almost certainly well established in Orkney and Shetland, and many other unrecorded raids probably occurred before this. Records do show the first Viking attack on Iona taking place in 794. The arrival of the Vikings (in particular the Danish Great Heathen Army) upset the political and social geography of Britain and Ireland. In 867 Northumbria fell to the Danes; East Anglia fell in 869. Though Wessex managed to contain the Vikings by defeating them at Ashdown in 871, a second invading army landed, leaving the Saxons on a defensive footing. At much the same time, Æthelred, king of Wessex, died and was succeeded by his younger brother Alfred. Alfred was immediately confronted with the task of defending Wessex against the Danes. He spent the first five years of his reign paying the invaders off.
In 878, Alfred's forces were overwhelmed at Chippenham in a surprise attack. It was only now, with the independence of Wessex hanging by a thread, that Alfred emerged as a great king. In May 878 he led a force that defeated the Danes at Edington. The victory was so complete that the Danish leader, Guthrum, was forced to accept Christian baptism and withdraw from Mercia. Alfred then set about strengthening the defences of Wessex, building a new navy, 60 vessels strong. Alfred's success bought Wessex and Mercia years of peace and sparked economic recovery in previously ravaged areas.[23]

Alfred's success was sustained by his son Edward, whose decisive victories over the Danes in East Anglia in 910 and 911 were followed by a crushing victory at Tempsford in 917. These military gains allowed Edward to fully incorporate Mercia into his kingdom and add East Anglia to his conquests. Edward then set about reinforcing his northern borders against the Danish kingdom of Northumbria. Edward's rapid conquest of the English kingdoms meant Wessex received homage from those that remained, including Gwynedd in Wales and Scotland. His dominance was reinforced by his son Æthelstan, who extended the borders of Wessex northward, in 927 conquering the Kingdom of York and leading a land and naval invasion of Scotland. These conquests led to his adopting the title 'King of the English' for the first time.

The dominance and independence of England was maintained by the kings that followed. It was not until 978 and the accession of Æthelred the Unready that the Danish threat resurfaced. Two powerful Danish kings (Harold Bluetooth and later Sweyn, his son) both launched devastating invasions of England. Anglo-Saxon forces were resoundingly defeated at Maldon in 991. More Danish attacks followed, and their victories were frequent. Æthelred's control over his nobles began to falter, and he grew increasingly desperate. His solution was to pay the Danes off: for almost 20 years he paid increasingly large sums to the Danish nobles in an attempt to keep them from English coasts. Known as Danegeld, these payments slowly crippled the English economy and eventually became too expensive.

Æthelred then made an alliance with Normandy in xxxx, through marriage to the Duke's daughter Emma, in the hope of strengthening England. He then made a great error: in xxxx he ordered the massacre of all the Danes in England, which had serious consequences. It angered Sweyn, who unleashed a decade of devastating attacks on England. Northern England, with its sizable Danish population, sided with Sweyn. By xxxx, London, Oxford and Winchester had fallen to the Danes. Æthelred fled to Normandy, and Sweyn seized the throne. Sweyn suddenly died in xxxx, and Æthelred returned to England to confront Sweyn's successor, Cnut. However, in xxxx, Æthelred also suddenly died. Cnut swiftly defeated the remaining Saxons, killing Æthelred's son Edmund in the process. Cnut seized the throne, crowning himself King of England.[24]

Alfred of Wessex died in 899 and was succeeded by his son Edward the Elder. Edward and his brother-in-law Æthelred of (what was left of) Mercia began a programme of expansion, building forts and towns on an Alfredian model. On Æthelred's death his wife (Edward's sister) Æthelflæd ruled as "Lady of the Mercians" and continued the expansion.
It seems Edward had his son Æthelstan brought up in the Mercian court, and on Edward's death Æthelstan succeeded to the Mercian kingdom and, after some uncertainty, to Wessex.

Æthelstan continued the expansion of his father and aunt and was the first king to achieve direct rulership of what we would now consider England. The titles attributed to him in charters and on coins suggest a still more widespread dominance. His expansion aroused ill-feeling among the other kingdoms of Britain, and he defeated a combined Scottish-Viking army at the Battle of Brunanburh. However, the unification of England was not a certainty. Under Æthelstan's successors Edmund and Eadred the English kings repeatedly lost and regained control of Northumbria. Nevertheless, Edgar, who ruled the same expanse as Æthelstan, consolidated the kingdom, which remained united thereafter.

There were renewed Scandinavian attacks on England at the end of the 10th century. Æthelred had a long reign but ultimately lost his kingdom to Sweyn of Denmark, though he recovered it following the latter's death. However, Æthelred's son Edmund II Ironside died shortly afterwards, allowing Canute, Sweyn's son, to become king of England. Under his rule the kingdom became the centre of government for an empire which also included Denmark and Norway.

Canute was succeeded by his sons, but in xxxx the native dynasty was restored with the accession of Edward the Confessor. Edward's failure to produce an heir caused a furious conflict over the succession on his death in xxxx. His struggles for power against Godwin, Earl of Wessex, the claims of Canute's Scandinavian successors, and the ambitions of the Normans whom Edward had introduced to English politics to bolster his own position led each party to vie for control during Edward's reign.

Harold Godwinson became king, in all likelihood appointed by Edward the Confessor on his deathbed and endorsed by the Witan. William of Normandy, Harald Hardråde (aided by Harold Godwinson's estranged brother Tostig) and Sweyn II of Denmark all asserted claims to the throne. By far the strongest hereditary claim was that of Edgar the Ætheling, but his youth and apparent lack of powerful supporters caused him to be passed over, and he did not play a major part in the struggles of xxxx, though he was made king for a short time by the Witan after the death of Harold Godwinson.
• Location: Erie
• Post ID: xxxxxxxx erie