2007-05-17, 14:01
  #1
Member
Hello!

I have a website with just over 2,000 members.

I'm using a MySQL database (MySQL 5).

Which storage engine should I use for my tables? MyISAM or InnoDB?

What are the pros and cons of each?
2007-05-17, 17:09
  #2
Member
C - J
http://mysqldatabaseadministration.b...reference.html

Also, keep in mind that within a single database you can use different storage engines for different tables.
In a database we used before I converted it to SQL Server, one table ran on MyISAM so we could have a local counter (every time a row was modified, a value for that row was automatically incremented). That was the only table running on MyISAM, and before we converted the database to SQL Server it was also the table that caused the most trouble. Oh well, that's what happens when you rely on hacks...
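As a rough illustration of mixing engines per table (table and column names here are made up for the example, not the ones from that database):

Code:
CREATE TABLE members (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE page_hits (
    page_id INT UNSIGNED NOT NULL PRIMARY KEY,
    hits    INT UNSIGNED NOT NULL DEFAULT 0
) ENGINE=MyISAM;

-- A simple counter stand-in: bump the value every time the row is touched.
UPDATE page_hits SET hits = hits + 1 WHERE page_id = 42;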

Sooner or later you will probably need transactions, and those are best handled in InnoDB.
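A minimal sketch of what a transaction looks like in practice (table and column names are only placeholders). On MyISAM tables the same statements run, but COMMIT/ROLLBACK have no effect, so a failure halfway through leaves the data half-updated:

Code:
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- With InnoDB you could instead issue ROLLBACK to undo both updates.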

Quote:
Originally posted by James Day at Wikipedia
I'd go with InnoDB until it's been proved that it's unsuitable. The first reason is reliability. Get a crash with MyISAM and you have the unreliable and slow, related to table size, table repair process. Same thing with InnoDB and you instead get the fixed time, fast and reliable log apply/rollback process. As the data set gets bigger, this matters more and more, as it does if you want to do things like sleep instead of being woken up in the middle of the night to fix a crashed table.

For reliability and performance, we use InnoDB for almost everything at Wikipedia - we just can't afford the downtime implied by MyISAM use and check table for 400GB of data when we get a crash.

The speed advantage of MyISAM is overstated IMO. LiveJournal with a mostly write environment saw a big increase in speed when it switched from MyISAM to InnoDB and they are very happy with InnoDB.

At Wikipedia we achieved some major performance gains by exploiting the way InnoDB clusters records by their primary key. Took some quite common queries from 50 seeks per result page to 1 or 2 seeks per page. Worse, the code at that time allowed people to go back 50,000 rows using LIMIT, causing 50,000 seeks and a DOS vulnerability. Big improvement to working set size from the change as well, so the results were more often in cache. It's so great an advantage that like the write caching it could be a dealbreaker for anything MySQL might do about alternatives to InnoDB.

With InnoDB, if you don't need repeatability, you might also switch to the least consistent transaction isolation level for a particular query, since that can reduce the locking work InnoDB needs to do. That is, assuming that you really do need only the MyISAM lack of guarantees.

On the other hand, a crazy bit of code did once use SELECT ... FOR UPDATE to scan every row of a table. Switching that table to MyISAM was a quick hack until it could be fixed. The table locking of MyISAM was way faster than watching InnoDB lock every row individually.

We used to use MyISAM for fulltext, duplicating the data in the InnoDB master table. Once the query rate grew sufficiently high and the data size grew past a gigabyte or so it became completely unacceptable on performance grounds, taking more than half of our database server capacity and still not working well. We abandoned it and switched to Lucene. By that point we were in the top thousand sites on the net, so it had survived pretty well.

I'm also a MySQL Support Engineer, but these views are from my Wikipedia role, not the MySQL one.

James Day
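James Day's point about lowering the isolation level can be applied per connection; a minimal sketch, assuming a hypothetical page_views table and MySQL's default InnoDB isolation level of REPEATABLE READ:

Code:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM page_views WHERE created_at > '2007-05-01';
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- back to the InnoDB default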
2007-05-17, 19:55
  #3
Member
Thanks for the link and the quote from Wikipedia!

Now I have a bit more information and a better idea of how to think when I set up my databases in the future.

Thanks!
