The traditional wisdom for designing database schemas is to use a design tool (typically based on a UML or E-R model) to construct an initial data model for one's data. When one is satisfied with the result, the tool will automatically construct a collection of 3rd normal form relations for the model. Applications are then coded against this relational schema. When business circumstances change (as they frequently do), one should rerun the tool to produce a new data model and a new collection of tables, populate the new schema from the old one, and alter the applications to work on the new schema, using relational views wherever possible to ease the migration. In this way, the database remains in 3rd normal form, which represents a 'good' schema as defined by DBMS researchers. 'In the wild', schemas often change at least once a quarter, and the traditional wisdom is to repeat the above exercise for each alteration. In this paper we report that the traditional wisdom appears to be rarely, if ever, followed for large, multi-department applications. Instead, DBAs appear to minimize application maintenance (and hence schema changes) rather than maximize schema quality. As a result, schemas quickly diverge from their E-R or UML models, and actual database semantics drift farther and farther from 3rd normal form. We term this divergence of reality from 3rd normal form principles database decay. Obviously, this is a very undesirable state of affairs and should be avoided if possible. The paper continues with tactics to slow down database decay. We argue that the traditional development methodology of coding applications in ODBC or JDBC is at least partly to blame for decay; hence, we propose an alternate methodology that should be more resilient to it.
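To make the view-based migration tactic concrete, the sketch below (all table and column names are hypothetical, not taken from the paper) shows a schema that has been split into 3rd normal form, with a relational view reconstructing the old table's shape so that unmodified legacy queries keep working:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# New 3NF schema after migration: the old wide table
# emp(name, dept, dept_floor) has been split into two tables,
# removing the dept -> floor transitive dependency.
cur.execute("CREATE TABLE emp(name TEXT PRIMARY KEY, dept TEXT)")
cur.execute("CREATE TABLE dept(dept TEXT PRIMARY KEY, floor INTEGER)")
cur.execute("INSERT INTO emp VALUES ('Alice', 'shoes'), ('Bob', 'toys')")
cur.execute("INSERT INTO dept VALUES ('shoes', 2), ('toys', 3)")

# A view with the pre-migration shape shields applications that
# were coded against the old schema from the change.
cur.execute("""
    CREATE VIEW emp_legacy AS
    SELECT e.name, e.dept, d.floor AS dept_floor
    FROM emp e JOIN dept d ON e.dept = d.dept
""")

# A legacy query, written against the old schema, still runs.
rows = cur.execute(
    "SELECT name, dept_floor FROM emp_legacy ORDER BY name"
).fetchall()
print(rows)  # [('Alice', 2), ('Bob', 3)]
```

Views of this kind ease migration for reads, but updates through such a view are restricted in most systems, which is one reason applications often end up being altered anyway.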