[game_edu] Feedback request on this programming lecture snippet

Jim Parker jparker at ucalgary.ca
Fri Jun 10 13:56:57 EDT 2011


The kind of thing being discussed here is the subject of Informatics
degree programs, and knowing a little can be dangerous. In the worst
case, quicksort can be an O(n*n) algorithm (quick: when is that??). Game
developers hire people to know this stuff.

Also, I've taught my students for a long time that they must first make
the program work, then make it fast. A fast program that does not work
is never useful. So, when developing, use a sort you already have, or
one that is simple to write and debug. If profiling shows that it needs
to be sped up, then look at the data and create a sort that works
quickly in the circumstances found.
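A minimal sketch of that workflow (the function name and the score data are my own illustrative choices, not from the post): ship the sort the language already gives you, and only think about replacing it once a profile of realistic data says you must.

```python
import random
import timeit

def sort_scores(scores):
    # Step 1: make it work. The built-in sort is correct and already
    # fast for most inputs; this is the version to ship first.
    return sorted(scores)

# Step 2, only if profiling flags sort_scores as a hot spot:
# measure it on data shaped like the real workload before rewriting.
scores = [random.randint(0, 1000) for _ in range(10_000)]
elapsed = timeit.timeit(lambda: sort_scores(scores), number=100)
print(f"100 sorts of 10,000 scores: {elapsed:.3f}s")
```

If the measured time is negligible next to the rest of the frame, there is nothing to optimize and the simple version stands.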

Jim



Answer:
(many quicksort implementations, namely those that always pick the first
or last element as the pivot, are O(n*n) when the data is already
sorted. So if the input is likely to arrive nearly sorted, don't use a
naive quicksort)
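A minimal sketch of that failure mode (the first-element pivot and the comparison counter are illustrative choices, not from the post): a naive quicksort that always pivots on the first element makes n*(n-1)/2 comparisons when the input is already sorted, because every partition is maximally lopsided.

```python
def quicksort(items, counter):
    # Naive quicksort: always uses the first element as the pivot.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)  # one comparison per remaining element
    less = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    return quicksort(less, counter) + [pivot] + quicksort(greater, counter)

n = 300
counter = [0]
result = quicksort(list(range(n)), counter)  # already-sorted input
assert result == list(range(n))
# Every pivot is the minimum, so one side of each partition is empty
# and the comparison count hits the quadratic worst case: n*(n-1)/2.
print(counter[0], n * (n - 1) // 2)  # both 44850
```

On shuffled input of the same size the count drops to roughly n*log(n), which is exactly why the worst case is easy to miss in casual testing.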

On 6/10/2011 11:33 AM, Steve Graham wrote:

> One more comment about algorithm/data structure choice/macro optimization.
>
> Something I emphasize in classes (and practice) has always been to focus
> on simplicity and getting it correct first. Anyone who's been through a
> data structures or algorithms class *knows* that quicksort is much
> better than a simpler sort such as insertion or selection sort. BUT the
> problem is that it ain't necessarily so.
>
> I've had students do experiments in class to determine *when* quicksort
> became better than the other sorts -- results have varied a lot, but
> typically, it has not been until the size of the list to be sorted has
> gotten into several thousands of elements.
>
> Quicksort is more complex and easier to screw up than the other
> sorts. So ... if your list of elements is on the order of tens or
> hundreds, then quicksort is likely NOT to be the best choice. The
> additional risk of errors more than offsets the negligible improvement
> in performance.
>
> This is often the case -- the benefits of a more complex but
> asymptotically better algorithm don't justify the complexity. This is
> often due to data size -- the data just isn't big enough!
>
> Another factor which is crucial is the nature of the data -- there can
> be big differences in the performance of algorithms based on
> characteristics of the data. Is it almost sorted? Is it reverse sorted?
> Is it ...? Analysis is typically done making assumptions about uniform
> random distributions of data (because it makes the analysis easier or
> even possible, and because we may not have a clue what the data will
> really be like). If you have or can get some idea about the character
> of the data, that is a vital consideration as well.


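The classroom experiment described above can be sketched like this (the list sizes, repetition counts, and the particular implementations are my assumptions; the real crossover point varies by language, machine, and data):

```python
import random
import timeit

def insertion_sort(items):
    # Simple O(n*n) sort with a very low constant factor on short lists.
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def quicksort(items):
    # Textbook quicksort with a middle pivot (avoids the sorted-input trap).
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

# Time both sorts on random lists of increasing size and eyeball
# where (or whether) quicksort pulls ahead.
for n in (16, 128, 1024):
    data = [random.random() for _ in range(n)]
    t_ins = timeit.timeit(lambda: insertion_sort(data), number=5)
    t_qs = timeit.timeit(lambda: quicksort(data), number=5)
    print(f"n={n:>5}: insertion {t_ins:.5f}s  quicksort {t_qs:.5f}s")
```

Running this on different machines and data shapes is the point of the exercise: the numbers, not the asymptotic labels, decide which sort wins at a given size.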
