Hibernate Schema Generation and Validation — What ddl-auto Actually Does in Production
by Eric Hanson, Backend Developer at Clean Systems Consulting
What ddl-auto controls
spring.jpa.hibernate.ddl-auto tells Hibernate what to do with the database schema when the application starts. Five values, with meaningfully different behavior:
none — Hibernate does nothing to the schema. No validation, no creation, no modification. The database must already match the entity model exactly; if it doesn't, runtime errors occur when queries run.
validate — Hibernate validates that the existing schema matches the entity model. If a column expected by an entity doesn't exist, or a column type doesn't match, Hibernate throws SchemaManagementException and the application fails to start. No schema modifications.
update — Hibernate compares the existing schema to the entity model and adds missing tables and columns. It never removes tables or columns, and it never alters existing ones. Safe for purely additive changes; renames, type changes, and drops are silently skipped.
create — drops and recreates all tables on every startup. All data is destroyed.
create-drop — drops and recreates tables on startup; drops all tables on application shutdown. Used by integration tests.
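For reference, the "entity model" that all five settings act on is simply the set of mapped entity classes. A minimal sketch, using a hypothetical Order entity with jakarta.persistence annotations (the javax.persistence package on older Spring Boot versions):

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// Hypothetical entity. With ddl-auto: create, Hibernate derives and executes
// a CREATE TABLE orders (...) statement from this mapping; with validate, it
// checks that the table and columns already exist with compatible types.
@Entity
@Table(name = "orders")
public class Order {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "total", nullable = false)
    private Long total; // a Long field maps to BIGINT on most dialects

    protected Order() {} // JPA requires a no-arg constructor
}
```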
The dangerous defaults
Spring Boot's default ddl-auto value depends on the datasource type. For embedded databases (H2, HSQL, Derby), the default is create-drop. For all other databases, the default is none.
The gap between these defaults causes a common failure: developers work with H2 in tests (where schema auto-creation works), then deploy to PostgreSQL and encounter none — which silently accepts a schema mismatch and fails at runtime with ERROR: column "xyz" does not exist.
Always set ddl-auto explicitly, regardless of the datasource type:
# Development
spring:
  jpa:
    hibernate:
      ddl-auto: validate  # or update if you want additive auto-migration

# Production
spring:
  jpa:
    hibernate:
      ddl-auto: validate
Why update is wrong for production
update sounds safe — it only adds, never removes. The problems:
Column renames are not updates. If you rename user_id to owner_id in your entity, update adds an owner_id column and leaves user_id in place. Queries that go through the entity work (they use owner_id now), but the old column wastes space and still holds the existing data while the new column is empty for every row. The rename was never performed.
No rollback path. If update adds a column and then a later column addition fails partway through startup, the schema is in a partially-modified state. There's no mechanism to roll back to the previous state.
Production schema drift. Over time, update leaves behind unused columns, incorrect constraints, and schema state that differs across environments. What's in the database is determined by the sequence of entity model states the application was deployed with — not by an explicit migration history.
Type changes are not updates. Changing a column from VARCHAR(255) to TEXT or from INTEGER to BIGINT is not applied by update. The existing column type is preserved; the entity mapping diverges from the schema.
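To make the rename case concrete: with ddl-auto: update, Hibernate emits only the additive half of the change. A sketch of the resulting DDL, assuming the PostgreSQL orders table from the example:

```sql
-- Entity field renamed from user_id to owner_id; ddl-auto: update at next startup.
-- Hibernate sees owner_id as a missing column and emits only:
ALTER TABLE orders ADD COLUMN owner_id bigint;
-- Result: user_id still exists and still holds the data; owner_id is NULL for
-- every existing row. No data moves, and nothing is ever dropped.
```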
validate as the production setting
validate is the correct ddl-auto for production. At startup, Hibernate verifies that:
- Every table mapped to an entity exists
- Every column mapped to an entity field exists
- Column types are compatible with entity field types
If validation fails, the application refuses to start with a clear error message:
HibernateException: Schema-validation: missing table [audit_entries]
HibernateException: Schema-validation: wrong column type encountered in column [total]
in table [orders]; found [numeric (Types#NUMERIC)], but expecting [bigint (Types#BIGINT)]
The application fails fast at startup rather than at runtime when a query fails. This is the behavior you want in production: a schema mismatch should prevent startup, not cause mysterious failures on the first request that touches the mismatched table.
validate does not protect against application-level logic bugs with the schema — a missing index, a constraint with incorrect semantics, a default value that doesn't match the entity default. It validates structural compatibility, not semantic correctness.
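Conceptually, this structural check is a comparison of the column metadata the entity model expects against what the database reports. A simplified plain-Java sketch of that comparison — not Hibernate's actual implementation, just the shape of the check and of the error messages it produces:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified sketch of structural schema validation: compare the column types
// the entity model expects against what the database metadata reports.
public class SchemaValidationSketch {

    // expected: column name -> type from entity mappings
    // actual:   column name -> type from database metadata
    static List<String> validate(Map<String, String> expected, Map<String, String> actual) {
        List<String> problems = new ArrayList<>();
        for (Map.Entry<String, String> e : expected.entrySet()) {
            String actualType = actual.get(e.getKey());
            if (actualType == null) {
                problems.add("missing column [" + e.getKey() + "]");
            } else if (!actualType.equalsIgnoreCase(e.getValue())) {
                problems.add("wrong column type in [" + e.getKey() + "]: found ["
                        + actualType + "], expecting [" + e.getValue() + "]");
            }
        }
        return problems;
    }

    public static void main(String[] args) {
        Map<String, String> entityModel = Map.of("id", "bigint", "total", "bigint");
        Map<String, String> databaseSchema = Map.of("id", "bigint", "total", "numeric");

        // The drifted "total" column produces one validation problem.
        validate(entityModel, databaseSchema).forEach(System.out::println);
    }
}
```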
Flyway and Liquibase — versioned schema management
validate requires that someone else manages the schema. That's the migration tool's job. Flyway and Liquibase both version database migrations in explicit, ordered scripts that are applied in sequence.
Flyway with Spring Boot:
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
</dependency>
spring:
  flyway:
    enabled: true
    locations: classpath:db/migration
    baseline-on-migrate: false  # true only for existing databases without migration history
Migrations live in src/main/resources/db/migration/ as SQL files following the naming convention V{version}__{description}.sql:
V1__create_orders_table.sql
V2__add_customer_email_to_orders.sql
V3__create_audit_entries_table.sql
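As an illustration, the body of the second migration might look like this (the column type and length are assumptions, not specified above):

```sql
-- V2__add_customer_email_to_orders.sql
-- Illustrative body; VARCHAR(255) is an assumed length.
ALTER TABLE orders ADD COLUMN customer_email varchar(255);
```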
Flyway applies unapplied migrations in version order at startup, before Hibernate initializes. Spring Boot's auto-configuration ensures Flyway runs before the EntityManagerFactory is created — validate sees the post-migration schema.
The Flyway + validate combination. Both run at startup:
- Flyway checks the flyway_schema_history table and applies pending migrations
- Hibernate's validate verifies that the schema matches the entity mappings
If they're in sync (migrations reflect the entity model), startup succeeds. If a migration was applied but the entity mapping wasn't updated (or vice versa), validate catches the mismatch.
Flyway migration for an existing column rename:
-- V5__rename_user_id_to_owner_id.sql
ALTER TABLE orders RENAME COLUMN user_id TO owner_id;
The entity is updated to ownerUserId mapped to owner_id:
@Column(name = "owner_id")
private Long ownerUserId;
This is the contrast with ddl-auto: update — the rename is explicit, versioned, and applied in the correct order. The migration history shows exactly when and why the column was renamed.
Schema validation failure modes and how to diagnose them
Startup failures from validate produce one of a few error patterns:
Missing table:
HibernateException: Schema-validation: missing table [payment_methods]
The entity PaymentMethod exists but the payment_methods table doesn't. Either the migration creating the table hasn't been applied, or the table name in the entity mapping doesn't match the actual table name. Check @Table(name = "...") and verify the Flyway migration status.
Missing column:
HibernateException: Schema-validation: missing column [payment_reference] in table [orders]
The migration adding payment_reference hasn't run, or the column was added under a different name. Check Flyway's flyway_schema_history table.
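To check migration status directly, query Flyway's history table (these are standard columns of flyway_schema_history):

```sql
-- One row per applied migration; success = false marks a failed migration.
SELECT installed_rank, version, description, installed_on, success
FROM flyway_schema_history
ORDER BY installed_rank;
```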
Wrong column type:
HibernateException: Schema-validation: wrong column type encountered in column [amount]
in table [orders]; found [numeric (Types#NUMERIC)], but expecting [bigint (Types#BIGINT)]
The entity maps amount to a Long (which Hibernate expects as BIGINT), but the column is NUMERIC. Either the migration used the wrong type or the entity mapping needs a @Column(columnDefinition = "NUMERIC") to match.
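If the migration used the wrong type, the schema-side fix is a new migration that converts the column. A PostgreSQL sketch (the version number V6 is hypothetical):

```sql
-- V6__change_orders_amount_to_bigint.sql
-- PostgreSQL syntax; the USING clause makes the conversion explicit.
ALTER TABLE orders
    ALTER COLUMN amount TYPE bigint USING amount::bigint;
```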
Diagnosing schema validation failures: compare the entity mapping against the actual schema. psql -c "\d orders" or SHOW CREATE TABLE orders (MySQL) shows the actual schema. The entity mapping shows what Hibernate expects.
Generating migration scripts from entity changes
Hibernate can generate the DDL it expects — useful for creating migration scripts when you change entity mappings:
spring:
  jpa:
    properties:
      javax:
        persistence:
          schema-generation:
            scripts:
              action: create
              create-target: create-schema.sql
At startup, Hibernate writes the DDL for all entities to create-schema.sql. This is not a migration — it's the full schema creation script. Diff it against your current schema to identify what Flyway migration you need to write.
Tools like the liquibase:diff Maven goal automate this comparison: connect to a reference database and the current database, and generate the difference as a migration script.
The profile-specific configuration pattern
Different environments need different settings:
# application.yml — default, applies everywhere
spring:
  jpa:
    hibernate:
      ddl-auto: validate
---
# application-dev.yml — development profile
spring:
  jpa:
    hibernate:
      ddl-auto: update  # additive auto-migration acceptable in dev
  flyway:
    enabled: false  # flyway optional in dev if using update
---
# application-test.yml — test profile
spring:
  jpa:
    hibernate:
      ddl-auto: create-drop  # or validate with a test migration path
  flyway:
    enabled: true
    locations: classpath:db/migration,classpath:db/testdata  # includes test data seeds
Production uses validate with Flyway. Development uses update or validate with Flyway depending on team preference. Tests use create-drop for isolation or validate with a clean test migration applied before each test.
The principle: the default (no profile active) should be the production-safe setting. More permissive settings are layered on top for development and test.
What validate doesn't catch
validate verifies structural compatibility — column existence and approximate type compatibility. It doesn't verify:
- Missing indexes — validate doesn't check for required indexes. A query that relies on an index that was never created silently falls back to a sequential scan.
- Constraint correctness — a unique constraint on the wrong column, a missing foreign key, a check constraint with the wrong semantics.
- Default values — a column with DEFAULT NOW() in the database but no default in the entity, or vice versa.
- Column length mismatches — VARCHAR(50) in the database vs a String field in the entity (which Hibernate maps to VARCHAR(255) by default). The column exists and is compatible, but inserting a 100-character value will fail at runtime.
For comprehensive schema validation, validate is necessary but not sufficient. Schema migration testing — applying the migrations against an empty database and verifying the result — catches structural problems. Integration tests that exercise all entity operations catch the functional problems validate misses.