Tuesday, 20 September 2011

Experiences with Fuzzzy social bookmarking and ontology creation

Fuzzzy.com is a social bookmarking tool with a twist. It lets users add and share bookmarks, but it also has a folktology (a mix of folksonomy and ontology) where users can create tags that have meaningful relations between them. Among other things, you can create tags, create relations between tags, and vote on relations between tags and between bookmarks and tags. Through this voting feature, the community evolves more relevant tag-bookmark and tag-tag relations over time.

In this blog post I'll share some of my experiences with folktologies. Unfortunately I don't have an updated scientific study of fuzzzy.com's folktology, so you'll have to make do with my personal observations. To learn more about fuzzzy.com and the folktology, read my paper Metadata Creation in Socio-semantic Tagging Systems: Towards Holistic Knowledge Creation and Interchange.

Folksonomies, as we have come to know them on many social media sites, let users create any tag, and that tag becomes available to everyone else. An ontology, on the other hand, is a set of concepts within a domain and the relationships between those concepts. Ontologies are the backbone of the Semantic Web, so Fuzzzy was a project to see if imprecise folksonomies could be replaced with more semantic folktologies. Typical problems with folksonomies include synonyms, ambiguous tags, overly personalised and inexact tags, homonyms, plural and singular forms, conjugated words and compound words [1]. While a folksonomy works just fine for a blog, it does not scale as well if semantics is important. As it grows, it becomes a large flat list of tags where a large percentage of the tags just don't make any sense. The usual workaround is to show only the most frequently used tags.

How the folktology of fuzzzy.com was used
A folktology seemed to be a smart use of Topic Maps. The folktology was seen as a way to make tagging more semantic, but here's what happened:
  • Few users created relations between tags.
  • Few users assigned subject identifiers (needed to tell tags apart or to establish that they represent the same subject).
  • Few users voted on relations between tags.
  • Few users voted on relations between tags and bookmarks.
Users did, however, use tags as if they were a folksonomy. They created both tags and bookmarks and added tags to bookmarks. Tags did have better quality [3], as users could edit tags and make them more consistent. Tags were more intuitive and made more sense when used on bookmarks. Common folksonomy problems such as synonyms, ambiguity, overly personalised tags, homonyms, and plural versus singular forms were to a large degree weeded out.

I can only speculate on why this new way of using tags (the Fuzzzy folktology) was not more widely adopted. Here are my personal hunches; I don't think there was a single reason but a mix:
  • People did already have a fixed view on how tagging should work and did not care to learn about the other tag features. 
  • People are in a hurry when tagging. They bookmark because they don't have time to read the webpage, so it's very important to be able to add bookmarks quickly and to assign tags just as quickly.
  • Bookmarking is personal even if you use a social service. Every user has their own reasons for saving those particular bookmarks, so he or she has no incentive to curate a pretty, shared ontology/tag network.
  • The rewards for creating tag-to-tag relations are perceived as low. Users don't see a direct value. With a built-out folktology or semantic network, users can have more relevant bookmarks suggested to them. This is obviously a nice-to-have feature, but it does not justify the work that goes into managing the folktology.
  • The advanced tagging features become overkill and a source of cognitive overload for regular web users.
These interpretations are based on typical use of the system. Analyzing this further, we begin to see the contours of some general underlying issues.

Problems with a folktology
The social and semantic bookmarking service fuzzzy.com has a shared semantic tag set. This introduces several issues in comparison to plain and simple folksonomies. I summarize these issues as matters of context and authority.
  • Who gets to update the individual tags? A tag is shared and reused by all. Suppose a person creates a tag, say “vintage furniture”, and another person wants to use the same tag but would rather have a different description for it, or thinks the tag should have a different name altogether (language and vocabularies naturally evolve over time). Then what? Who gets to decide? The meaning of a tag depends entirely on the person's context; their situation and background decide what is meaningful to them. No two persons have exactly the same context.
  • For tags to be semantically interoperable, each tag needs an addressable identifier (a PSI in Topic Maps jargon) and a particular semiotic meaning attached to it. An authority or other entity must make sure the ID and the meaning of the tags are fairly stable. In an open web environment this is not trivial.
  • One person's definition of a term might be slightly different from another's because they have different backgrounds. It would not be right to force them to use a definition that does not reflect how they see the world.
  • What if a tag name has to be changed because language is changing? If it is changed, other users might not find what they are looking for. One could present old tag names marked as deprecated, but that still enforces a new world view onto users. In some cases (maybe on a corporate intranet) this might be a good thing.
  • Who will garden the tag set and have the last say? What if a user wants to delete a semantic tag he has created, but the tag has been adopted and used by many other users?
Comparing folksonomies and folktologies is not straightforward. In general we can say that a folksonomy provides more serendipity and discovery, while a folktology provides more semantics that can be used to deliver more relevant or context-aware content.

An ontology is a generic, commonly agreed upon specification of a conceptualization of a domain [2]. This definition is not compatible with open online social web 2.0 environments, where people do not have a shared understanding. This and the issues mentioned above suggest that a folktology such as the one on fuzzzy.com is best suited for personal use or for coherent teams that have a large amount of bookmarks.

To fix these folktology issues, one solution could be personal topic maps that overlap with other users' topic maps. For this to work with some degree of semantic precision, you could have subject identifiers for each node, or deduce node similarity by looking at naming and nearby nodes.
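As a rough illustration of the idea, here is a minimal Python sketch of merging two personal tag maps by subject identifiers. The data model, function names and example PSIs are hypothetical, not Fuzzzy's actual implementation; the weak name-matching fallback stands in for the "deduce similarity from naming" idea.

```python
# Sketch: merging personal tag maps via subject identifiers (PSIs).
# All names and data below are illustrative assumptions.

def same_subject(node_a, node_b):
    """Two tag nodes denote the same subject if they share a PSI,
    or (as a weak fallback) have identical normalized names."""
    if node_a["psis"] & node_b["psis"]:
        return True
    return node_a["name"].strip().lower() == node_b["name"].strip().lower()

def merge_maps(map_a, map_b):
    """Merge user B's tag nodes into user A's map, unifying nodes
    that represent the same subject."""
    merged = [dict(n, psis=set(n["psis"])) for n in map_a]
    for b in map_b:
        b = dict(b, psis=set(b["psis"]))
        for a in merged:
            if same_subject(a, b):
                a["psis"] |= b["psis"]  # keep the union of identifiers
                break
        else:
            merged.append(b)
    return merged

alice = [{"name": "Vintage furniture",
          "psis": {"http://example.org/psi/vintage-furniture"}}]
bob = [{"name": "vintage furniture", "psis": set()},
       {"name": "Mid-century", "psis": {"http://example.org/psi/mid-century"}}]
print(len(merge_maps(alice, bob)))  # 2: the two furniture nodes are unified
```

In practice the fallback matching would need to be far more careful (homonyms break naive name matching), which is exactly the authority problem discussed above.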


Monday, 5 September 2011

A Hebbian adaptive semantic triplestore

While reading Making Things Work: Solving Complex Problems in a Complex World by Yaneer Bar-Yam, in particular his chapter on networks and collective memory, I got the idea of mixing Hebbian theory with RDF. I have played with similar thoughts before with Topic Maps technology in my research paper Quality, Relevance and Importance in Information Retrieval with Fuzzy Semantic Networks. This time my thoughts were more on adaptive knowledge.

So here's my idea for a Hebbian triplestore
Each triple in the triplestore database has an array of, say, up to 10 rows, where each row holds a datetime value. When a SPARQL query uses a triple, the current date is added to its array as a new row. The array acts as a FIFO queue, so the oldest date is removed when the array is full. Whenever more triples are added to the database, it checks whether it is full; if so, it deletes the triples that are least used. Whenever the database is idle, it performs routine checks to find out which triples are least used. The database could then be regularly fed new triples, and it would over time automatically adapt to the domain where it is used (queried). So the Hebbian adaptive semantic triplestore is a knowledge store that evolves and becomes more relevant in the environment where it is used.
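The scheme can be sketched in a few lines of Python. The class and method names here are my own invention; a real store would record accesses inside SPARQL query evaluation rather than via an explicit touch() call, and "least used" is approximated simply by counting recorded accesses.

```python
from collections import deque

# Sketch of the Hebbian adaptive triplestore idea (hypothetical API).

class HebbianTripleStore:
    def __init__(self, capacity=1000, history=10):
        self.capacity = capacity  # max number of triples kept
        self.history = history    # access timestamps per triple (FIFO)
        self.triples = {}         # (s, p, o) -> deque of timestamps

    def add(self, triple):
        if triple not in self.triples:
            if len(self.triples) >= self.capacity:
                self._evict_least_used()
            # deque(maxlen=...) drops the oldest entry automatically (FIFO)
            self.triples[triple] = deque(maxlen=self.history)

    def touch(self, triple, when):
        """Record that a query used this triple."""
        self.triples[triple].append(when)

    def _evict_least_used(self):
        # Least used = fewest recorded accesses within the history window.
        victim = min(self.triples, key=lambda t: len(self.triples[t]))
        del self.triples[victim]

store = HebbianTripleStore(capacity=2, history=3)
store.add(("ex:a", "ex:p", "ex:b"))
store.touch(("ex:a", "ex:p", "ex:b"), "2011-09-05")
store.add(("ex:c", "ex:p", "ex:d"))
store.add(("ex:e", "ex:p", "ex:f"))  # store full: least-used triple evicted
print(("ex:a", "ex:p", "ex:b") in store.triples)  # True: it was recently used
```

The bounded per-triple history is what makes the store forget: a triple that was popular long ago but is no longer queried gradually loses its protection against eviction.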

Another interesting feature of this triplestore is that it would know which triples are most used. By correlating the dates in the triples' arrays, it could suggest relevant or extended SPARQL queries.
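A naive sketch of that suggestion mechanism, assuming access times are stored as plain numbers (e.g. Unix timestamps); the triples, window and threshold are illustrative assumptions, not part of the original design.

```python
# Sketch: suggesting related triples from correlated access times.
# A triple whose accesses often fall close in time to another's is a
# candidate for extending the user's query.

def co_accessed(times_a, times_b, window=60):
    """Count access pairs that happened within `window` seconds."""
    return sum(1 for a in times_a for b in times_b if abs(a - b) <= window)

def suggest(access_log, queried_triple, threshold=2):
    """Return triples whose usage correlates with the queried one."""
    base = access_log[queried_triple]
    return [t for t, times in access_log.items()
            if t != queried_triple and co_accessed(base, times) >= threshold]

log = {
    ("ex:a", "ex:p", "ex:b"): [100, 200, 300],
    ("ex:a", "ex:p", "ex:c"): [110, 205, 900],  # often used with the first
    ("ex:x", "ex:p", "ex:y"): [5000],
}
print(suggest(log, ("ex:a", "ex:p", "ex:b")))  # [('ex:a', 'ex:p', 'ex:c')]
```

A production version would want something cheaper than the pairwise comparison (e.g. bucketing accesses into time slots), but the principle is the same: triples that fire together get suggested together.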

Saturday, 18 June 2011

Sparql.us online RDF SPARQL query builder

Just made a simple site for testing SPARQL queries.
With this site you don't have to install triple stores, RDF engines or other Semantic Web stack components before testing your SPARQL queries.

Check it out at http://sparql.us

The tool has restrictions on RDF graph size. It's only meant for learning SPARQL and testing/debugging SPARQL queries on the fly.

Saturday, 4 June 2011

Koios, the free open and collaborative problem solving platform

This year I have been prototyping a solution for solving complex social challenges online.

Many people don't get what it is I am developing. Most people who do get the idea think I'm mad to even try to do it. Some also think I am naive and wasting my time chasing a fairytale unicorn.
In this blog post I share my vision and some of my thoughts about the system and why the project is worth doing.

I have had the idea for this system for many years. I love watching documentaries. Documentaries often present social problems. Take for example social issues such as the obesity epidemic, ethics and abortion, poverty, racism, human trafficking, corruption, bullying. The list goes on and on and there are small issues on a personal level and there are the really big issues like climate change.
I have always liked to develop new, innovative web solutions. Developing this kind of problem solving service seemed like the ultimate challenge. But I always knew that it would be too big a challenge to take on in my spare time alone.

After searching the web for months for similar projects, I could not find any site that had tried to do this. There are, of course, thousands of sites that target various social problems. Some communities also have a set of web 2.0 tools like a forum, a blog or a wiki. On these sites you get to discuss the problem, but there is no built-in support for solving it in a systematic and analytic way. There are, however, some portals for social issues; www.worldchanging.com is a good example. Sites like this are a step closer, but they mostly let people gather around the problem and share ideas. Again, they are not really solving the problem, just creating awareness and helping out with social networking, donations etc.

Last year, to my surprise, I got funding from the Norwegian government for this project, so I set out to design the system. I had already spent a few years doing research in my spare time, so I had a good starting point.

After a year of iterative design, prototyping, testing, reviewing and further research, I finally have a mock-up that can be viewed online. This pre-alpha demo is online so I can more easily show it to people remotely and get feedback. It also helps other interested parties find it, which brings in even more feedback.

Now to the interesting part. Why on earth do I think it can work and what makes it new?
When Wikipedia started out, few people envisioned that it would become the de facto reference website and that it would replace traditional reference books.

My vision is to take the process of social problem solving to the masses. Anyone who wants to help solve a social issue should be able to just go to the website and be able to use proper tools and get help from others to solve the problem.

We recognize that you don't really solve complex social issues; you just make the situation better or worse. When the relevant variables/indicators fall below an agreed, acceptable level, we can say that a problem is solved.

To achieve this vision I want to provide a clear process so that users can follow an intuitive workflow with predefined steps. Users are not bound to follow the workflow step by step, but they should go through all the steps to make sure all sides of the problem have been considered. It is up to the site to scaffold and guide the user through the process, helping the user describe models with feedback loops, do scenario planning, do stakeholder analysis and all the other activities required to solve a complex problem. In this way we can really “solve” problems on a mass scale.

There is also a competition/game aspect. This is important to attract users that are not domain experts, analysts or already highly motivated to solve a problem. Everyone gets points and badges as they contribute.
What makes it revolutionary is that it is intended for mass-scale collaboration. I want to support expert problem solvers, analysts, domain experts and researchers, but most importantly support thousands of ordinary people collaborating in connected problem spaces. In this environment a few experts do the supervision and coordination while hundreds of users carry out tasks like finding facts and testing hypotheses. This is crowdsourcing and collective intelligence taken to the limit. The system is unique in that it lets people from all over the world collaborate intelligently, guided by a sound analytic process. This is truly global collective intelligence.

Today many communities use a set of typical web 2.0 tools to solve problems. Because no single portal or platform supports the whole range of features required to solve problems, they often use a mix of tools. Here is a list of limitations of using a mix of tools like email, Skype, forums, blogs, wikis, Q&A sites, Google Docs, more customized platforms like Ning, Drupal or SharePoint, or project-management tools such as Basecamp:
  • Data/conversation gets fragmented in different places and is difficult to find later on.
  • Important insights/data gets lost in offline conversations.
  • Different tools/portals are de-motivating for new potential contributors as they are presented with all kinds of designs/layouts/user interfaces in different locations. This mess is simply unattractive for new people.
  • There are no fixed slots to put certain data. Users are not sure how to organize insights, hypotheses, facts etc. With SharePoint and similar tools there is no predefined process or structure, so each group has to decide this for itself.
  • Transparency is difficult when you don’t know where things are and the different systems have different user accounts.
  • There is no built-in support for verifiability, confidence and reviewing. All of this has to be done manually somehow.
  • There is no way to store all data in a structured form that makes it possible to holistically connect problem spaces and synthesize new knowledge from the underlying data shared across all problem spaces.
  • No uniform way to share findings using open data formats.
  • The underlying data cannot be made accessible as web services/APIs and queried by remote systems to allow for Research 2.0, science grids or Linked Open Data.
  • Limitations on the ability to find similar problems/solutions.
  • Limitations on the ability to find potential collaborators.
  • They do not make it easier for people who would like to collaborate on several problems with different groups.
So to summarize: I believe only experts or highly motivated, coherent groups can use traditional tools effectively. Traditional tools also have big limitations when it comes to potential future benefits.

I hope I have been able to show the potential gains of such a system and that it is both important and very relevant to try to develop. Even if the project fails it will bring lots of new knowledge to the fields of web science, usability, information visualization, sensemaking, computer-supported collaborative argumentation, virtual learning communities, knowledge management and other fields related to web and system development.

You are welcome to try out Koios at http://koios.org

Do not hesitate to send me unfiltered feedback. Anything from conceptual comments and design critique to feature requests is highly appreciated.
The current Koios version (alpha preview as of June 2011) is not a working tool but should give a good indication of what I am trying to develop. The site is continuously updated.

A few words to help Google out. This website is about collaborative web-centric problem solving, soft systems modelling, crowdsourcing, collective intelligence, information infrastructure, complex societal issue resolution, solving wicked problems, tackling social challenges, changing the world, and making the world a better place.

Wednesday, 25 May 2011

Checklist before launching a website

If you are developing a small website for a hobby project or similar, you might not have test scripts or a test plan. Here's a simple checklist you can use for testing your site.

Design / Sign off
  • All relevant persons/stakeholders have been given the chance to comment on the design.
  • Relevant persons understand the concept/point of the site.
  • Someone other than you can use the site with ease.
  • Design has been tested on old projector with poor contrast. (There are tools to do this also)
  • All pages have valid title and meta tags.
  • Other typical SEO guidelines.
  • 404 page.
  • Internal Server Error page.
  • Content can be printed? (Print style sheet).
  • Meta tags.
  • Fav icon.
  • Semantic Html
  • Added analytics. Website visitor tracking statistics. E.g. Google analytics.
  • Site map.
  • Friendly URLs
  • Form validation.
  • Mobile support.
  • Share buttons: Twitter, facebook, LinkedIn etc.
  • Contact form works.
  • humans.txt
  • Yslow.
  • Stress test.
  • Pages checked for page size. A page footprint of more than 1 MB is bad.
  • Caching evaluated. (Page output caching, data caching, web server caching, ETags etc.)
  • Logs checked for errors.
  • Home page downloads in 10 seconds or less.
Stability / accessibility
  • Tested in all relevant browsers such as IE, Chrome, Firefox, Safari, Opera on different OS and with different versions?
  • Site tested without JS enabled in the browser.
  • Tested with various screen resolutions both small and large.
  • Site tested with W3C CSS validator.
  • Site tested with different screen resolutions and on iPad.
  • Site tested on mobile.
  • Site tested using accessibility toolbar etc.
  • Site tested with full JS error notification turned on.
  • Do you know your monthly bandwidth limit? Make sure your site resources are not too big and visited by too many people.
  • All assemblies checked for memory leaks.
  • Backup.
  • Robots file.
  • System evaluated against security threats.
  • OWASP guidelines evaluated.
  • Need to set up e-mail or SMS alerts? (There are monitoring services you can buy)
  • ASafaaWeb security analyzer
  • Content placed consistently.
  • Tense/Style of writing consistent.
  • Dead links check.
  • No empty pages.
  • No Lorem Ipsum pages.
  • Pages spellchecked.
  • Content formatting consistent.
  • Alt text on images.
  • You have contact info, privacy, feedback, terms and copyright info.
  • Site tested with realistic content.
  • Site checked for missing resources (404 on images , js, etc.).
Content publishing related editor support
  • Nested lists.
  • Links.
  • Images in text.
  • Other text formatting.

Thursday, 10 March 2011

Why do software development projects for large customers take disproportionately longer than for smaller customers

Occasionally, I meet other developers who tell me they can't understand how we could have spent the amount of money and time that we did on a project. Often they will even say things like, "My small company could have developed this in half the time". So why is it that solutions for large organizations take so much more time to develop than a similar type of solution for a small company?
Here's what I think: time and resources are NOT a linear function of the number of features to be developed. And the reason? In one word, I blame complexity.

Let's take a look at the definition of complexity. In most dictionaries you'll find that complexity tends to characterize something intricate and compounded. So to have an intricate and compounded project, you have to have many things together; a complex project must consist of many things. This seems logical. Large companies have more stuff, right? Stuff? Yes: more people, more projects, more subsystems, more interfacing systems, more history, more data, more legacy code, more policies, more legal requirements, more constraints, more processes, more methodologies, more guidelines, more internal and external stakeholders and so on. All these things together form a complex project, and changes to one part will often have consequences for other parts. In the following sections I'll try to delve into what complexity consists of; bear with me:

The uncertainty factor
Projects for large customers have more stuff. No one can have a full overview of all the stuff and all its details. Naturally this leads to situations where things are simply unknown or uncertain. When the number of things we have to deal with grows beyond a certain size, it gets hard for us to manage them. Often more people are brought in to deal with the mess, and a risk management [1] consultant might be hired to tackle the uncertainty by calculating risks. Computers are great at managing large amounts of data; the human brain, on the other hand, tends to overlook, forget, make faulty assumptions and fall for biases. This naturally leads to errors and wasted time. To deal with the uncertainty we usually seek information to make sense of the mess. This search for just-in-time knowledge overlaps: several people dig for the same information over time, the same information is prepared over and over for various purposes with slight adjustments, and the prepared information is often never found again for others to reuse. And when the information is found, the reader might misinterpret it or apply it to a situation where the knowledge doesn't apply. These are typical problems that the field of knowledge management has tried to solve for a couple of decades now.

Uncertainty also causes problems later in the project. Humans are very bad at comprehending complex systems, so they are bound to underestimate. This underestimation leads to schedules being broken later in the process, which in turn leaves less time to do the job right. You end up with lower quality, which can cause big problems and lots of wasted time later.

The dependency factor
As we have seen, complexity means being intricate and compounded. This implies there are relations between the stuff: stuff depends on other stuff. An example of a not-so-complex project is a one-man programming shop doing a small project for a small customer. He has relatively few relations to other people and between the things he is working on.

We can use graph theory and network complexity as examples in this regard. When the number of nodes increases, the number of possible connections increases at a higher rate. This means that the people involved will spend more and more time on communication and dealing with more and more stuff.

The triangular number formula n(n-1)/2 gives the maximum number of relations: with 3 nodes there can be at most 3 relations, with 4 nodes 6, with 8 nodes 28, and so on.
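A quick check of the numbers above, plus one more data point to show how fast the count grows:

```python
# Maximum number of pairwise relations (communication paths) between n nodes.

def max_relations(n):
    return n * (n - 1) // 2

for n in (3, 4, 8, 20):
    print(n, "nodes ->", max_relations(n), "possible relations")
# 3 nodes -> 3, 4 nodes -> 6, 8 nodes -> 28, 20 nodes -> 190
```

The count grows quadratically, which is why communication overhead, not feature count, comes to dominate as the number of people and systems grows.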

In many cases there is no direct relation, and communication needs to take several node hops, which obviously takes more time. This also increases the possibility of communication errors, so the project takes even longer.

Is there really more stuff to deal with on projects for large customers?
Below is a listing of what I have experienced during my work as a freelancer, and as a consultant for everything from small companies to large multinational corporations.

  • Large companies usually have higher demands on quality assurance.
  • Large customers are more likely to have a number of parallel projects where internal resources have to be coordinated between projects. Often you will have to wait for some other project.
  • Large companies often have rigid processes with decision gates and formal project planning steps that simply take time. Decisions need to be approved on many levels.
  • There are probably more of you, and because many are dependent on you, you cannot just sit wherever you like and do the project. Less flexibility in your work situation usually results in less effective work, as you have to spend more time on travel instead.
  • In large organizations there's usually more bureaucracy, which means more waiting if you need to order something, if you need access rights, etc. You might have internal rules preventing you from fixing your computer when it breaks.
  • There are usually more meetings in large organizations, as there are more people involved who want to have their say. A two-hour meeting with 10 persons equals 20 hours.
  • You are likely to have more guidelines, compliance or regulations on your process so you need to spend more time reporting and writing documentation. 
  • You often can't choose the tools, methods, frameworks you like. You have to learn and reuse what is already in use. This makes your process less lean.
  • Working with large customers, you are more likely to also work on several projects for the same customer. When working on several projects at once, a lot of time is wasted because of the chaos that comes with parallel work.
  • For large customers there's a bigger chance that there are specialists or experts to deal with certain parts of the project. When these persons quit or leave for another customer, project or even new job it takes time to replace them.
  • There are probably more people involved and there might be some misunderstandings when something is communicated verbally and new people who come on the project don't get the same piece of information.
  • For large customers there is a bigger chance that some of the new people on the project are or will become less motivated, slowing down progress or creating tension in the group. The new people have less knowledge about the project and often feel they have less to contribute. The developers who have been on the team from the start often develop an "I'll just fix it myself" mentality, solving problems themselves because they know it would take considerably longer for a new person who doesn't know the system as well. This in turn leaves the new person feeling left out or less valued.
  • For large companies there are usually more stakeholders and end users involved. The chance is bigger that relevant stakeholders and/or users do not get to give feedback or input to the development process. This often causes problems later and leads to longer development time and possibly even conflicts.
  • As a side effect of the project taking more time, you probably need more developers to deliver on time. Not everyone can be a super programmer; the project often has to "pick from the bottom of the barrel", so there will be novice developers who are less effective.
  • More people also mean there's greater chance that someone will become a problem. This can be because they don't fit into the group, because they simply didn't want to be on the project, because they had other expectations to what they were going to work on etc. 
  • More people needed often results in more staffing problems. Some people might feel that they are just thrown on the project.
  • For larger customers there might be collaboration problems or unhealthy competition between departments, suppliers, contractors, partners etc. 
  • The larger the organization, the more layers of management. The more layers of management, the harder it is for bad news to get to the top. No one wants to be the one to communicate the bad news.
  • When many departments are involved, you will often get into a situation where you have to figure out how to please everyone. You don't want opposition to your project.
  • You might even meet managers that have their own personal agenda, and you will have to deal with what comes along with that. 
  • It's more likely that you are required by IT politics to use large frameworks or platforms that make development more difficult than it has to be.
  • Large organizations usually have more requirements on branding.
  • For larger customers there's usually more infrastructure, more servers etc. This means more time goes into moving files here and there for testing etc.
  • The build and test environments are usually larger because of higher degrees of QA and Application Lifecycle Management requirements. Managing access to these environments takes more time.
  • There are usually more technologies involved, and you can't know them all, so you have to spend your work day googling how to do this and that.
  • You are more likely to integrate your solution with some legacy system or third-party product. This requires the team to learn new APIs.
  • There are probably more systems and infrastructure involved in your work process, so when one subsystem is down, the entire team might be impeded.
  • With more server infrastructure comes more security management. 
  • For large customers there are usually more people involved, so there's more waiting to get in touch with others. Maybe you need a confirmation on some issue, so you wait for others to get around to it or to pick up the phone. Of course you don't sit doing nothing until you get the answer, but it takes your concentration away and makes your day more chaotic, as you have a bunch of things to do and follow up on.
  • There's going to be more coordination needed. Coordination between core team developers, other developers, between developers and designers, testers, managers, product owners etc. 
  • When there are more people, you are more often distracted. Maybe someone needs you to do something. This takes your attention away, and once attention is lost you go for coffee, start chatting or read the online newspaper.
  • You probably have to work with external agencies or contractors, either because they want the best people or because the required skills can't be found within your or their own organization, so parts of the project are outsourced. Then you are left having to communicate with the remote contractor. Teams that are not co-located have more communication and trust problems [2], and communication is of course much slower.
  • Large organizations usually have more history and therefore more legacy code. Writing code is very fast when there is little code. As the code base gets bigger things slow down. Rewriting and refactoring takes more time. More time is needed to architect and engineer the code. 
  • It's more likely that code is shared between projects. When requirements change it is usually more difficult to make changes if the code is reused on several projects. If these projects are not co-located it can become a nightmare to coordinate changes on shared code.  
  • Because the organization has more history it will probably have more code. As the code base gets larger, compilation takes longer time. 
  • More code makes it more difficult for developers to get an overview, so new features or subsystems might not get implemented in the best way. Often this code will need an overhaul later, when it has escalated into some kind of problem.
  • When the code base is large and there are many developers, developers tend to be afraid to change each other's code, especially if there are no unit tests. They fear consequences they don't know of, or that other developers won't agree with the change. They might even feel pressure to just get the job done, so they skip refactoring and just add more code, making it less readable.
  • When there are more developers involved, not everyone will have the same feeling of responsibility. Some developers will avoid making big changes to the code because they don't want to be the one who messed up the system if something doesn't work as planned.
  • As the system gets larger and time goes by, there are more places and more chances to put in even more code. You get requirement creep [3], where stakeholders push for more features.
  • Large organizations usually have more users, so more work needs to be done to scale the system, either through code or by adding more server infrastructure. 
  • Because there are more users you have to develop fallback solutions and cater for more types of users.
  • With more users you need to do more stress testing and performance tuning.

Time leads to more time
  • As the project drags on there will be more opportunities to make changes. Suddenly someone decides that the system should be based on a completely different platform because of some top-management strategic decision. 
  • If the project goes on for a long time some of the project participants will probably get tired or bored. This naturally slows down the pace.

Projects for large customers take more time simply because the project will be more complex.

Tuesday, 1 March 2011

Why not use Flash/Silverlight on public websites

This blog post is about why not to use Flash/Silverlight on your website. You can find hundreds of people who have already written on this topic, but I still meet people who think it is a good idea. There are a few cases where it makes sense to use these technologies, but in most cases you, your customer(s) and your end users are better off not using Flash or Silverlight.

Why not use Flash/Silverlight
  1. A page with Flash/Silverlight will load slower, especially if the page also has a lot of JavaScript on it.
  2. Inconsistencies with web standards. A Flash movie gives a different feeling than ordinary web content:
    • Print does not work the same.
    • Copying content might not work.
    • Navigation, including the back button, doesn't work as you are used to.
    • Enlarging text doesn't always work the same.
    • Input fields, buttons etc look different and can confuse end users.
    • Bookmarking or linking to your site is more difficult since you're probably not able to link to the exact section within the Flash/Silverlight movie.
    • Searching text on the page (ctrl+F) doesn't work.
    • Less usable for the disabled.
  3. Internationalization and localization is more complicated.
  4. SEO is more difficult and in most cases the results will be worse than with a plain HTML solution.
  5. It is more difficult to maintain the code, as the resulting .swf/.xap file is closed and requires special skills beyond regular web development skills.
  6. Flash design is usually hard coded. You cannot easily apply a new style, as you can with CSS. Silverlight makes this easier with XAML, but XAML again requires knowledge and special tools.
  7. It is usually more difficult to update Flash/Silverlight content. There's usually no WYSIWYG support.
  8. Q.A. gets more difficult (automated testing and code review is more difficult because it requires other skills and tools).
  9. Analytics using tools such as Google Analytics is more difficult and usually less precise.
  10. In most cases you don't have control over whether the end user has Flash/Silverlight installed. Even if the website is an Intranet and everyone is supposed to have the same setup, there's usually someone with a non-standard client who will complain.  
  11. Flash will not run on all platforms (e.g. Flash on iOS).
  12. Flash/Silverlight is more difficult when it comes to progressive rendering. Normal web pages load and render parts of the page before the entire page with all its resources has been sent over the wire. With Flash/Silverlight you might need to create pre-loaders or put extra work into the implementation to make it load smoothly.
  13. You might have to create fallback solutions for those who have not installed the Flash/Silverlight plugin.
  14. Flash/Silverlight usually requires more bandwidth.
  15. Some developers and designers are tempted to put overly fancy features or designs into the Flash/Silverlight, and they often do. These features can become a source of irritation, as they waste the user's time or make the user think. Often they will create fancy animations on their top-notch Mac computers and forget to test them on slower computers. Heavy use of animations also requires more CPU, which again is less environmentally friendly.
  16. As a developer you can get problems with stacking order if you have layers that should pop up over the Flash movie. The Flash will always position itself on top, regardless of z-index; setting the wmode parameter might work around it. Often you don't get into this problem until long after the Flash is developed, and you need to update the Flash to fix the problem.  
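For what it's worth, points 13 and 16 have a common markup-level mitigation if you embed the movie yourself: the wmode parameter, plus fallback content inside the object element. A minimal sketch, where movie.swf and fallback.html are hypothetical placeholder names:

```html
<!-- Sketch: embed a Flash movie so HTML layers can stack above it,
     with fallback content for users without the plugin. -->
<object type="application/x-shockwave-flash" data="movie.swf"
        width="600" height="400">
  <param name="movie" value="movie.swf" />
  <!-- "opaque" or "transparent" lets positioned HTML elements render
       above the movie; the default "window" paints over everything -->
  <param name="wmode" value="opaque" />
  <!-- Anything inside <object> is rendered when the plugin is missing -->
  <p>This content requires the Flash plugin.
     <a href="fallback.html">View the plain HTML version.</a></p>
</object>
```

Note that non-default wmode values can cost rendering performance, which is yet another trade-off you take on by choosing a plugin in the first place.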
Now that the web is becoming more ubiquitous, with mobiles and more and more client types appearing (like tablet computers), it is getting more important to adhere to standards and reuse best practices.