This is simply the process of combining smaller tables into larger ones. It can be used to address problems with performance or scalability.
These problems usually arise because tables are stored in separate files on disk, so when the database is queried, each of several files must be accessed to satisfy the joins, slowing the query down.
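A minimal sketch of the idea, using Python's built-in `sqlite3`; the `customers`/`orders` schema and the data are invented purely for illustration:

```python
import sqlite3

# Hypothetical normalized schema: customer names live in their own table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.execute("INSERT INTO customers VALUES (1, 'Alice')")
con.execute("INSERT INTO orders VALUES (10, 1, 25.0)")

# Normalized read: every query pays for the join.
joined = con.execute("""
    SELECT o.id, c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()

# Denormalized copy: the customer name is duplicated onto each order row,
# trading storage and update cost for faster, join-free reads.
con.execute("""
    CREATE TABLE orders_denorm AS
    SELECT o.id, c.name AS customer_name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""")
flat = con.execute("SELECT id, customer_name, total FROM orders_denorm").fetchall()
print(joined == flat)  # both reads return the same rows
```

The trade-off is that the duplicated name must now be kept in sync wherever it appears, which is exactly the dependency problem that normalization removes.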
I would simply describe this as the process of breaking big tables into smaller ones.
The proper definition is that database normalization is a method of reorganising the data within tables to reduce dependency. This helps to isolate data so that insertions, deletions and updates to a field can be made in a single table, with the relationships between the tables propagating the change throughout the database.
The goal of normalization is to reduce redundant data within each table and to make the data in each table more coherent.
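The single-table update described above can be sketched with `sqlite3`; the `departments`/`employees` schema is invented for this example:

```python
import sqlite3

# Invented example: instead of repeating the department name on every
# employee row, normalization moves it into its own table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE employees (
    id INTEGER PRIMARY KEY, name TEXT,
    dept_id INTEGER REFERENCES departments(id))""")
con.execute("INSERT INTO departments VALUES (1, 'Sales')")
con.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [(1, 'Ann', 1), (2, 'Bob', 1)])

# Renaming the department is now a single-row update; the relationship
# propagates the change to every employee that references it.
con.execute("UPDATE departments SET name = 'Revenue' WHERE id = 1")
rows = con.execute("""
    SELECT e.name, d.name FROM employees e
    JOIN departments d ON d.id = e.dept_id ORDER BY e.id
""").fetchall()
print(rows)  # [('Ann', 'Revenue'), ('Bob', 'Revenue')]
```

Had the department name been stored on each employee row, the same rename would have required touching every matching row and risked leaving some of them inconsistent.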
Atomicity – this basically means the transaction either works completely or not at all, and is sometimes called the “all or nothing” rule. I always liken it to an atomic bomb because they either work or fail. A failed transaction is rolled back.
Consistency – this means only valid data is written to the database. If any constraints, keys, etc. are violated, the data is not committed.
Isolation – this means that transactions occurring simultaneously must not interfere with one another; each behaves as though it were running alone.
Durability – this means that once a transaction has been committed, its changes are permanent and will survive subsequent failures such as a crash or power loss.
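Atomicity and consistency can be demonstrated together with `sqlite3`; the `accounts` table, balances and transfer amount are invented for this sketch:

```python
import sqlite3

# Hypothetical accounts table with a CHECK constraint forbidding overdrafts.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE accounts (
    id INTEGER PRIMARY KEY, balance REAL CHECK (balance >= 0))""")
con.execute("INSERT INTO accounts VALUES (1, 100.0)")
con.execute("INSERT INTO accounts VALUES (2, 50.0)")
con.commit()

# Atomicity: this transfer would overdraw account 1, so the CHECK
# constraint rejects it and the whole transaction is rolled back.
try:
    with con:  # commits on success, rolls back on error
        con.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
        con.execute("UPDATE accounts SET balance = balance + 500 WHERE id = 2")
except sqlite3.IntegrityError:
    pass  # consistency: the invalid data was never committed

balances = con.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(100.0,), (50.0,)] (unchanged after the rollback)
```

After the failed transfer neither row has changed, which is the “all or nothing” behaviour described above: the partial debit is never visible to other transactions or to later reads.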