Testing solution to MERGE with sequence NEXTVAL

In my last blog post (“Strange behavior in MERGE with sequence NEXTVAL”) I linked to an article written by Bob Jankovsky. In that article he suggests solving the issue with MERGE and NEXTVAL by returning the next value from a function. In this blog post I’ll run some tests and see how this solution behaves (regarding timings).
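A rough sketch of that function-based approach (the function, sequence, and table names here are my own placeholders, not from the article): instead of referencing NEXTVAL directly in the MERGE, the INSERT branch calls a PL/SQL function, so the sequence is only touched when the function is actually invoked.

```sql
-- Hypothetical wrapper: fetch the next sequence value only when called
CREATE OR REPLACE FUNCTION get_seq_val RETURN NUMBER IS
BEGIN
   RETURN my_seq.NEXTVAL;
END;
/

MERGE INTO target t
USING source s
   ON (t.code = s.code)
 WHEN MATCHED THEN UPDATE SET t.name = s.name
 WHEN NOT MATCHED THEN INSERT (id, code, name)
      VALUES (get_seq_val, s.code, s.name);
```

The tests below compare the timing of this variant against the plain NEXTVAL version.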

Continue Reading »

Strange behavior in MERGE with sequence NEXTVAL

Today we learned the following: “When using a sequence in a MERGE statement, the next value is generated for all rows, both updated and inserted (hence for all rows in the source).”
Our initial reaction was that this must be a bug, but is it really?
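A minimal illustration of the behavior (table and sequence names are my own, not from our system): even though NEXTVAL only appears in the INSERT branch, the sequence advances for every source row.

```sql
CREATE TABLE target (id NUMBER, code VARCHAR2(10), name VARCHAR2(30));
CREATE SEQUENCE my_seq;

-- NEXTVAL is referenced only in the WHEN NOT MATCHED branch ...
MERGE INTO target t
USING source s
   ON (t.code = s.code)
 WHEN MATCHED THEN UPDATE SET t.name = s.name
 WHEN NOT MATCHED THEN INSERT (id, code, name)
      VALUES (my_seq.NEXTVAL, s.code, s.name);

-- ... yet the sequence has advanced once per source row,
-- including the rows that were merely updated:
SELECT my_seq.CURRVAL FROM dual;
```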

Continue Reading »

Use default parameter/setting (row) if not set specific for entity

The other day I was asked about the following: “We have a VARIABLE table and a Variable entity. A new requirement is to restrict availability of a limited number of these entities to chosen customers.” I replied: “The volumes?” And the response was: “Quite low – now around 70 rows, it might slightly exceed 100 in the future. No LOBs.”

The developer continued: “So far we have been doing such filtering by applying conditions on FK columns referencing the COMPANY table. We want to do the same here. But there’s a small difference: a Variable can be visible either to all customers (no restrictions) or to 1 or more chosen customers. So, an intersection table is quite a natural choice, e.g. a COMPANY_VARIABLE table with only 2 columns: FK_VARIABLE and FK_COMPANY. Only FK_VARIABLE would be NOT NULL, so the FK referencing the COMPANY table would be optional.

I’d like to avoid keeping many rows in the new table for a variable without visibility restrictions. For such variables it would be enough to have only 1 row with null in FK_COMPANY. Otherwise, it would be

  • too cumbersome,
  • error-prone (when adding a new company or a no-restriction variable, you’d always have to add many rows to the intersection table)
  • and an unnecessary use of DB storage.

In turn, if a variable were to be restricted, we would add a new (VARIABLE_ID, COMPANY_ID) row to the table.”
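A sketch of that design and of a lookup honoring the “null means no restriction” rule (all table and column names here are illustrative, assuming VARIABLE and COMPANY both have an ID primary key):

```sql
CREATE TABLE company_variable (
   fk_variable NUMBER NOT NULL REFERENCES variable (id),
   fk_company  NUMBER REFERENCES company (id)
);

-- Variables visible to a given company: either restricted to it,
-- or unrestricted (the single row with FK_COMPANY left NULL)
SELECT v.*
  FROM variable v
 WHERE EXISTS (SELECT 1
                 FROM company_variable cv
                WHERE cv.fk_variable = v.id
                  AND (cv.fk_company = :company_id
                       OR cv.fk_company IS NULL));
```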

Continue Reading »

SQL Developer, Database Copy and TRUNCATING unwanted tables

I just experienced something scary with SQL Developer and the “Database Copy” utility. One table in production was truncated by mistake. I have always been a little against using the SQL Developer “Database Copy …” feature in production, but for this one task during deployment of a new release, this customer (including me) has always used it. The customer has two somewhat static tables in a meta database (in a test zone), where they store table and column comments (don’t ask me why they are not using the COMMENT feature in the Oracle database). During a release the customer copies the data from these two tables into two production databases. Whether by accident or due to some kind of bug, SQL Developer started to truncate ALL tables in the schema.

Continue Reading »

Copy schema with data pump when using editions

We have just been through a relatively tricky situation, trying to copy our test schemas from our Oracle 12c database to the new Oracle 19c test database. We were trying to use data pump to copy the schemas. The copy was ordered from operations, and the schemas were exported and imported as ordered. Before importing, both the editions and the schemas were created manually. The schemas were also granted privileges to use the newly created editions. So check … the test database was ready to be used? But no. Things failed in test.
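Roughly the kind of manual preparation done before the import (the edition and schema names here are placeholders, not our real ones):

```sql
-- Run as a privileged user on the target 19c database
CREATE EDITION release_2 AS CHILD OF ora$base;

-- The imported schema must be allowed to use editions at all,
-- and then to use the specific edition
ALTER USER test_schema ENABLE EDITIONS;
GRANT USE ON EDITION release_2 TO test_schema;
```

As the post explains, getting this preparation right turned out to be the tricky part.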

Continue Reading »

The NUMBER datatype without precision

Lately I have seen projects using the NUMBER datatype for table columns without setting precision and scale. When I see this I always ask: “Is this intentional, or did they just forget to set the values?”. This is the problem with implicit configuration/settings: you don’t really know whether it was a mistake or intentional. So I usually go for explicit definitions (but of course … it depends).

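To make the contrast concrete (table names are illustrative): a bare NUMBER accepts up to 38 significant digits at any scale, while an explicit precision and scale state the intent.

```sql
-- Implicit: accepts up to 38 significant digits, any scale.
-- Was that really the intent, or an oversight?
CREATE TABLE t1 (amount NUMBER);

-- Explicit: at most 10 digits, 2 of them decimals.
-- The definition documents the intent.
CREATE TABLE t2 (amount NUMBER(10,2));
```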
Continue Reading »

Redundant NOT NULLs in DDL

I often see DDLs with NOT NULL definitions that are really redundant. When I see such DDLs I ask myself: “Does the writer really know what he or she is doing?”. They might not hurt your application, but why set them if they are redundant? (If you know a reason, please comment.)

Example:

CREATE TABLE post (
   id      NUMBER(19,0) GENERATED ALWAYS AS IDENTITY NOT NULL,
   title   VARCHAR2(255 CHAR),
   version NUMBER(10,0) NOT NULL,
   PRIMARY KEY (id)
);
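As far as I know, an identity column is implicitly NOT NULL, and the primary key enforces it as well. The data dictionary confirms this even when the explicit constraint is left out:

```sql
-- NULLABLE is 'N' for ID even without the explicit NOT NULL
SELECT column_name, nullable, identity_column
  FROM user_tab_columns
 WHERE table_name = 'POST';
```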

In this case I can see two reasons why I would NOT define the ID column with a NOT NULL constraint.
Continue Reading »

Java and JDBC for Oracle DBAs – Part 6: Spring-boot, JPA and Hibernate

In the last article we looked at Spring-boot and accessed the database using the JdbcTemplate class. Today we’ll rewrite the same application, but using JPA and Hibernate instead. The Spring Initializr web page has changed a little since last time, so the first part will be a little different.

Continue Reading »