Wednesday, April 18, 2007

Lab 10

Both Hartley and Shannon measure amounts of information in order to remove uncertainty. The difference between the two measures is that Shannon measures the average amount of information and accounts for the probability of occurrence, while Hartley measures the information from the raw number of possibilities and treats them all as equally likely. Hartley's method can therefore be misleading when the possibilities are not equally likely, because it does not account for the fact that some outcomes may rarely or never happen. Hartley measures information with a specific equation: H = log2(N), where N = the number of possibilities that can be made from the given information.
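
To make the difference concrete, here is a small Python sketch (my own illustration, not part of the lab) that computes both measures for four possible outcomes: Hartley's measure only depends on how many outcomes there are, while Shannon's entropy weights each outcome by its probability.

    import math

    def hartley(num_possibilities):
        # Hartley information: H = log2(N); every outcome is treated as equally likely.
        return math.log2(num_possibilities)

    def shannon_entropy(probabilities):
        # Shannon entropy: H = -sum(p * log2(p)), the average information per outcome.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Four possible outcomes.
    print(hartley(4))                                 # 2.0 bits
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits (equally likely: matches Hartley)
    print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36 bits (skewed: less than Hartley)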

Friday, April 6, 2007

Lab 9

Using Excel is a great way to organize and present data. Excel is an efficient software tool that lets you display your data in tables, charts and graphs. It uses linear regression to predict where a variable would fall in a given circumstance. While it cannot accurately predict the value 100% of the time, it is still useful and gives a good estimate of where to expect it. Excel is also a good spreadsheet to use because it has many functions that let you manipulate data in several different ways. Inductive modeling is important and useful because it helps you make sense of a large quantity of data. Inductive modeling helps people solve the unknown and predict future outcomes. Inductive modeling is similar to the Hertz model because both methods predict future outcomes and then study the actual outcomes. Inductive modeling is very useful in the real world because many jobs require the study of statistics. Excel is widely used in many different careers because it works efficiently.
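
Excel's trendline and FORECAST function are built on least-squares linear regression; the following Python sketch shows the same idea with made-up numbers (the hours-studied vs. exam-score data is my own illustration, not data from the lab).

    def fit_line(xs, ys):
        # Ordinary least-squares fit of y = m*x + b, the same idea behind
        # Excel's linear trendline and FORECAST function.
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        den = sum((x - mean_x) ** 2 for x in xs)
        m = num / den
        b = mean_y - m * mean_x
        return m, b

    # Made-up data: hours studied vs. exam score.
    hours = [1, 2, 3, 4, 5]
    scores = [52, 60, 71, 80, 88]
    m, b = fit_line(hours, scores)
    print(m, b)         # slope and intercept of the best-fit line
    print(m * 6 + b)    # predicted score for 6 hours (an estimate, not a guarantee)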

Thursday, March 29, 2007

Wednesday, March 7, 2007

Lab 7 Continued




The truth tables show that DeMorgan's law holds: NOT(A AND B) gives the same output as (NOT A) OR (NOT B) for every combination of inputs.

The only row where both expressions are 0 is when A and B are both 1.
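
A quick way to check this is to run through all four input combinations. The following Python sketch (my own, not part of the lab) prints both sides of the law and confirms they always agree.

    # Verify DeMorgan's law: NOT(A AND B) == (NOT A) OR (NOT B) for all inputs.
    print("A B | not(A and B) | (not A) or (not B)")
    for a in (0, 1):
        for b in (0, 1):
            left = int(not (a and b))
            right = int((not a) or (not b))
            print(a, b, "|", left, "|", right)
            assert left == right  # the two columns always match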


Thursday, March 1, 2007

Lab 7


Circuits & Truth Tables
When both A and B are 0 or when they are both 1, the result is 1.
If A and B do not match, then the result is 0.
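
This behavior, output 1 exactly when the inputs match, is the XNOR (equivalence) function. Here is a tiny Python sketch of its truth table, just for illustration:

    # XNOR: output is 1 exactly when A and B match.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", int(a == b))
    # 0 0 -> 1
    # 0 1 -> 0
    # 1 0 -> 0
    # 1 1 -> 1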

Thursday, February 22, 2007

Lab 6

Below is the procedure needed to convert the binary number 110010101 into the decimal number 405:

110010101 = (1 x 2^8) + (1 x 2^7) + (0 x 2^6) + (0 x 2^5) + (1 x 2^4) + (0 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0)

= 256 + 128 + 0 + 0 + 16 + 0 + 4 + 0 + 1

= 405
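
The same sum can be checked with a few lines of Python (my own sketch, not part of the lab):

    bits = "110010101"
    value = 0
    for bit in bits:
        # Each step shifts the running total one binary place left and adds the new bit.
        value = value * 2 + int(bit)
    print(value)          # 405
    print(int(bits, 2))   # 405, using Python's built-in base conversion as a check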

Below is how to convert the decimal number 529 into the binary number 1000010001:

Decimal   Quotient   Remainder   Binary (so far)
529       264        1           1
264       132        0           01
132       66         0           001
66        33         0           0001
33        16         1           10001
16        8          0           010001
8         4          0           0010001
4         2          0           00010001
2         1          0           000010001
1         0          1           1000010001
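
The repeated-division procedure in the table translates directly into a short Python sketch (again, my own illustration):

    n = 529
    digits = []
    while n > 0:
        digits.append(str(n % 2))   # the remainder is the next binary digit (right to left)
        n //= 2                     # the quotient becomes the next number to divide
    print("".join(reversed(digits)))   # 1000010001
    print(bin(529))                    # 0b1000010001, Python's built-in check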

In a positional number system the value of each digit is determined by its position. If n is the base of the system, each position is worth n times the position to its right; equivalently, each position is worth the position to its left divided by n. The decimal system is an example of a positional number system with a base of 10.
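
As a quick illustration of the positional idea, the same digit string stands for a different value in each base, because each digit is multiplied by a different power of the base (a small Python sketch of my own):

    # The value of "101" depends on the base: 1*base^2 + 0*base^1 + 1*base^0.
    for base in (2, 8, 10, 16):
        print(base, int("101", base))   # 5, 65, 101, 257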

In a non-positional number system, numbers are represented by different symbols rather than by position. The Roman numeral system is an example, because it represents each level of magnitude with a different symbol.

Thursday, February 15, 2007

Unix/Global Swarming

Unix is an operating system that has many advantages, including multitasking, which allows multiple programs to run at the same time. Unix allows multiple users to work at the same time as well because it shares processing time between each user. Unix is also safe; it prevents one program from accessing memory or storage space allocated to another program, and it enables file protection, requiring users to have permission to perform certain functions. Unix also gives the user many commands that the operating system carries out.

One of the commands is the finger command; it allows the user to find out information about any other user on the system. To execute the finger command, type “finger” followed by the network ID of the person you are trying to find information about. Unix then displays information including the person’s name, when that person logged in, and whether they have checked their mail.

Another Unix command finds out today’s date. All a person has to type is “date” and enter, and Unix promptly displays the day’s date and time with the correct time zone. Unix also allows you to display a calendar for any month in the past or future. To run this command, you type “cal”, a number 1 through 12 for the particular month you want to display, and the year you want as well. Unix then displays a calendar for the month and year you asked for. My favorite Unix command is the one that takes you back to your home directory. This is helpful when you have too much displayed and get lost in the system. To go back to your home directory, you simply type “cd” and Unix takes you back to your home directory.

Natural Born Cyborgs, Chapter 6: Global Swarming

This was a particularly interesting chapter, as it discussed different ways to route and direct information and then organize it so it is easier to retrieve. Clark uses an example of ants retrieving food for their colony, in which different ants start taking different routes to the food; the ants keep track of which routes are the best to take and proceed to use them. This process is called “positive feedback,” and it allows the colony to rapidly self-organize in order to exploit the best routes before gradually moving on, once the nearby food is exhausted, to the next closest source.
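
To see how positive feedback works, here is a toy Python simulation of my own (the route lengths, evaporation rate, and reinforcement rule are made-up illustrations, not Clark's): ants pick routes in proportion to the pheromone on them, the shorter route gets reinforced more often, and the colony converges on it.

    import random

    # Two routes to the food; the shorter one takes less time per round trip,
    # so ants using it deposit pheromone more often (made-up numbers for illustration).
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}

    for step in range(200):
        # Each ant chooses a route with probability proportional to its pheromone.
        route = random.choices(list(pheromone), weights=pheromone.values())[0]
        # Shorter routes are reinforced more strongly (more trips in the same time).
        pheromone[route] += 1.0 / lengths[route]
        # Evaporation keeps old trails from dominating forever.
        for r in pheromone:
            pheromone[r] *= 0.99

    print(pheromone)   # the short route almost always ends up with much more pheromone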

Clark then goes on to discuss a technique known as “collaborative filtering.” Collaborative filtering involves exploiting the basic principles of “swarm intelligence.” Swarm intelligence involves relatively dumb individual agents (such as the ants) creating beautiful, complex, and life-enhancing structures by following a few simple rules and by automatically pooling their knowledge courtesy of the chemical traces and structural alterations laid down by their own activity.

Collaborative filtering, Clark notes, exploits very similar principles to those underlying pheromone-based self-organization. This relates to how Amazon.com markets CDs or books to consumers, by showing you what other buyers who bought the same product as you also bought. Each episode of buying a product lays down a trace, and after a sufficient amount of consumer activity, exploitable patterns emerge. The simple tactic of allowing consumer activity to lay down cumulative trails thus supports a kind of automatic pooling of knowledge and expertise. One reason this type of procedure is important is that it allows patterns of consumer actions to speak for themselves and to lay down tracks and trails in consumer space as a by-product of the primary activity, which is online shopping. This is different from preset “categorization,” because the cumulative trail laying is unplanned, emergent, and as flexible as consumer choice itself.
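
A bare-bones sketch of this trail-laying idea (my own toy example with invented product names, nothing like Amazon's actual system): count how often items are bought together, then recommend whatever co-occurs most with the item just bought.

    from collections import Counter
    from itertools import combinations

    # Made-up purchase histories; each inner list is one customer's basket.
    baskets = [
        ["ant-book", "swarm-cd", "robot-kit"],
        ["ant-book", "swarm-cd"],
        ["swarm-cd", "robot-kit"],
        ["ant-book", "swarm-cd"],
    ]

    # Each purchase "lays down a trace": a co-occurrence count between pairs of items.
    co_bought = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            co_bought[(a, b)] += 1
            co_bought[(b, a)] += 1

    def recommend(item, top_n=2):
        # Recommend the items most often bought alongside the given one.
        scores = Counter({other: n for (first, other), n in co_bought.items() if first == item})
        return [other for other, n in scores.most_common(top_n)]

    print(recommend("ant-book"))   # ['swarm-cd', 'robot-kit']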

Clark then discusses searching over the internet. He says that the general idea of strengthening and weakening connections and trails as an automatic result of ongoing patterns of use may one day turn the world wide web itself into a kind of swarm intelligence. He then comments on the self-organizing web server called Principia Cybernetica Web. The key feature of the server is its ability to create, enhance, and disable links between pages as an automatic result of use. More popular links become more prominently displayed, instigating a positive feedback process, while little-used links dwindle away. In the future there may be individual human-machine mergers, with servers that would do all of this on something like a user-by-user basis.

Clark talks about how old search engines were problematic in that they were inefficient. Old search engines such as Yahoo and Infoseek would return too much irrelevant information and overlook the important information that was needed. These were text-based searches. Hyperlink pattern-based search is a potent tool for accessing and deploying the knowledge bases that we are collectively creating. The crucial determinant of the social and psychological impact of new knowledge-based technologies will be the ease, speed, and accuracy of access. Google is at the forefront of this movement with its hyperlink pattern searches. It is now simpler and quicker to enter a modestly well-chosen search string than to hunt through a huge file of pre-stored bookmarks. Powerful search routines allow a lot of relevant material to be recruited, grouped, and organized on the hoof into a kind of “soft-assembled” information package. Soft assembly is a useful concept: a multi-component system can sometimes self-organize to exploit a useful subset of elements or resources, creating a temporarily stable structure that solves some adaptive problem.
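
The hyperlink-based ranking idea can be illustrated with a tiny sketch of my own (a stripped-down, PageRank-style power iteration over an invented three-page web, not Google's real system): pages that are linked to by other well-linked pages end up with higher scores.

    # Toy link-based ranking over a made-up three-page web.
    links = {
        "home":     ["tutorial", "blog"],
        "tutorial": ["home"],
        "blog":     ["home", "tutorial"],
    }

    damping = 0.85
    rank = {page: 1.0 / len(links) for page in links}

    for _ in range(50):
        # Start each page with a small baseline, then let every page pass its
        # current score along its outgoing links.
        new_rank = {page: (1 - damping) / len(links) for page in links}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank

    print(sorted(rank.items(), key=lambda kv: -kv[1]))   # "home" comes out on top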

Clark then references Luis Rocha on Distributed Information Systems, which are “collections of electronic networked resources in some kind of interaction with communities of users.” The internet, the web, corporate intranets, and databases are all examples of these systems. The more these systems can be set up to be self-organizing, changing, and evolving in automatic response to changing patterns in user activity, the closer we come to a kind of collective human-machine symbiosis. “Fixed semantics” have no way to amend and update their own search and indexing as a community evolves new terms, ceases to use older ones, and begins to link together unrelated areas of study. An open-ended human-machine symbiosis facilitates the rapid dissemination of relevant information and the discovery of new knowledge. Bundling information into preset, pretagged physical packages may thus become less crucial, as users learn to soft assemble resources pretty much at will, tailored to their own specific needs.

Online articles or journals can be posted by anyone and are not always approved as a credible source. O’Donnell proposes separating the idea of validation from the idea of prepackaging. In the electronic world, major journals might add a kind of seal of approval to certain articles. The point for now is to flag the increasing viability of on-the-spot soft assembly as a means of accessing and grouping information and resources. Interactions between primary and secondary materials will also mutate, as hyperlinked assemblies allow scholars to move directly between different translations, editions, experiments, critiques, and so forth.

Clark then discusses the tool “StarLogo.” StarLogo is an educational software package meant to encourage better thinking about decentralized systems. It offers whole hordes of mini-agents, each one able to sense its environment, to respond according to programmable rules, and to alter the environment as it does so. This will help human minds grapple ever more successfully with the kind of decentralized complexity that characterizes so many critical systems, from highways to ant colonies to the world wide web to human minds themselves. StarLogo, however, only operates with simple rules. Our biotechnological self-image, by contrast, depicts the human individual as a swarm-like ecology with multiple heterogeneous parts.

It is important that as we accept and embrace new technologies we do not forget or sacrifice old technologies (e.g., books) as well. As new technologies emerge, we will also have to deal with the privacy and regulation issues that come with them. I found this article to be very interesting.