Tom Kyte

These are the most recently asked questions on Ask Tom

PLW-07204: conversion away from column type may result in sub-optimal query plan

Tue, 2024-05-07 19:26
I have been wondering for a long time why a select like "SELECT COUNT(*) INTO l_count FROM foo WHERE c_now >= ts + 1" in the following example issues a PLW-07204 warning, but "SELECT COUNT(*) INTO l_count FROM foo WHERE ts < c_now - 1", for example, does not. <code>
CREATE TABLE foo (ts DATE, tx VARCHAR2(30));
INSERT INTO foo VALUES (SYSDATE - 1, 'foo');
INSERT INTO foo VALUES (SYSDATE - 2, 'bar');
COMMIT;

ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:7204';

CREATE OR REPLACE FUNCTION new_foo RETURN NUMBER IS
  c_now   CONSTANT DATE := SYSDATE;
  l_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO l_count FROM foo WHERE c_now >= ts + 1;
  -- SELECT COUNT(*) INTO l_count FROM foo WHERE ts < c_now - 1;
  RETURN l_count;
END;
/

SELECT * FROM user_errors WHERE name = 'NEW_FOO';
</code>
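For what it's worth, the arithmetic can be moved off the column side without changing the result; a sketch (note that the commented-out variant in the question uses < where the strict equivalent of the flagged predicate needs <=): <code>
-- For DATE values, c_now >= ts + 1 is equivalent to the predicate
-- below, which leaves the column ts untouched:
SELECT COUNT(*) INTO l_count FROM foo WHERE ts <= c_now - 1;
</code> Leaving the column bare is also what keeps any index on ts usable.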
Categories: DBA Blogs

SQL Loader free file format data load

Tue, 2024-05-07 19:26
We have many files available on a Linux mount point and want to load them into a table. The files have no header, a different set of columns in each file, comma-separated fields, and newline record terminators. <b>Sample file data:</b> <code>
files1.csv
1,"Test1"
2,"Test2"
-----
files2.csv
1,123,"Case0"
2,456,"MyName"
-----
files3.csv
1234234,"2024-01-01","foo"
5894234,"2024-02-01","foo3"
</code> I'm looking for a way to load these files into a single table as given below. Is there a way we can achieve this using SQL*Loader? <b>Oracle table:</b> <code>
CREATE TABLE generic_files (
  column_1  VARCHAR2(4000),
  column_2  VARCHAR2(4000),
  column_3  VARCHAR2(4000),
  file_name VARCHAR2(4000),
  insert_dt DATE DEFAULT SYSDATE
);

COLUMN_1 | COLUMN_2   | COLUMN_3 | FILE_NAME  | INSERT_DT (will have SYSDATE)
1        | Test1      |          | files1.csv |
2        | Test2      |          | files1.csv |
1        | 123        | Case0    | files2.csv |
2        | 456        | MyName   | files2.csv |
1234234  | 2024-01-01 | foo      | files3.csv |
5894234  | 2024-02-01 | foo3     | files3.csv |
</code>
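One commonly used approach (a sketch under assumptions, not a tested recipe; the filenames and placeholder below are made up) is a single generic control file that relies on TRAILING NULLCOLS, so rows with fewer fields simply leave the trailing columns NULL, with the source filename stamped in per load: <code>
-- generic.ctl (template): FILE_NAME_HERE is a placeholder to be
-- replaced with the actual filename before each sqlldr run.
LOAD DATA
INFILE 'FILE_NAME_HERE'
APPEND
INTO TABLE generic_files
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  column_1,
  column_2,
  column_3,
  file_name CONSTANT 'FILE_NAME_HERE'
)
</code> A small shell loop can then substitute each real filename for FILE_NAME_HERE (for example with sed) and invoke sqlldr once per file; alternatively, an external table reading the same files can achieve this in pure SQL.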
Categories: DBA Blogs

parameters supplied to a @script.sql in SQLPlus

Tue, 2024-05-07 19:26
I have done a little searching and not found anything that speaks to this, so I thought I would ask the experts. Recently I had to build an SQL script to be run in SQL*Plus, and this script invoked another with the @ usage, passing in a derived value as an argument (to be received as &1 in the 2nd-level code). I wondered if I was facing a misunderstanding with scope, and was hoping you could tell the world for sure. Test case (no tables or other types involved): 1st-level SQL <code>
set pages 0 feed off head off verify off trims on
variable roll_qtr1 VARCHAR2(6)
exec :roll_qtr1 := to_char(sysdate,'yyyy')||'0'||to_char(sysdate,'q');

col filename1 new_val filename1
SELECT 'test_file_'||:roll_qtr1||'.dat' filename1 FROM dual;

--spool &filename1
@ get_file_data.sql :roll_qtr1
--spool off
</code> 2nd-level SQL (@ get_file_data.sql from above) <code>
set pages 0 feed off head off verify on lines 9000 colsep ','
variable parm_qtr varchar2(6)
exec :parm_qtr := '&1';

SELECT :parm_qtr FROM dual;
</code> Removing the single quotes from the &1 in the 2nd-level SQL gets the value I expect, whereas the code as it is gives: <code>
BEGIN :parm_qtr := <b>':roll_qtr1'</b>; END;

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 1
</code> <b>What gives?</b> It passed the actual bind variable name instead of the value? Which says to me that running an SQL file with "@" is very much the same as, say, an <i>include</i> command in a C program, or running a KSH file in UNIX using the "." notation for execution, all making whatever happens as if it's all in one process/session. Wouldn't this negate the value of command line parameters within the SQL*Plus session? Is there a by-reference vs. by-value thing going on? I sure would value a chance to learn the right understanding.
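The behaviour follows from the fact that SQL*Plus only performs textual &-substitution on the @ line; it knows nothing about bind values there, so the literal characters ":roll_qtr1" become &1. One way to pass the value itself (a sketch; the column alias qtr_sub is made up) is to copy the bind value into a substitution variable first with COLUMN ... NEW_VALUE: <code>
variable roll_qtr1 VARCHAR2(6)
exec :roll_qtr1 := to_char(sysdate,'yyyy')||'0'||to_char(sysdate,'q');

-- Copy the bind's value into substitution variable &qtr_sub:
col qtr_sub new_value qtr_sub noprint
SELECT :roll_qtr1 qtr_sub FROM dual;

-- Now &1 in the called script receives the value, e.g. 202402:
@get_file_data.sql &qtr_sub
</code>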
Categories: DBA Blogs

Why is json_array_t using 0-based indexing

Tue, 2024-04-30 13:46
It took me 25 years to get used to Oracle using 1-based indexing in pretty much all APIs. Now the rather new json_array_t data structure uses 0-based indexing, and it drives me crazy. Is there any reason behind this "strange anomaly", or did someone just want to drive people crazy? The following example only returns 2 and 3, because it must be written "FOR i IN 0 .. c_json.get_size() - 1 LOOP": <code>
DECLARE
  c_json CONSTANT json_array_t := json_array_t('[1, 2, 3]');
BEGIN
  FOR i IN 1 .. c_json.get_size() LOOP
    dbms_output.put_line(c_json.get_number(i));
  END LOOP;
END;
/
</code>
Categories: DBA Blogs

Converting column number values into array number values in SQL

Tue, 2024-04-30 13:46
I have a table like below. <code>
create table t2 (id varchar2(1), val number);
insert into t2 values ('a',1);
insert into t2 values ('a',2);
insert into t2 values ('a',3);
insert into t2 values ('a',4);
insert into t2 values ('b',1);
insert into t2 values ('b',2);
insert into t2 values ('b',3);
insert into t2 values ('c',1);
insert into t2 values ('c',2);
insert into t2 values ('c',4);
insert into t2 values ('d',1);
insert into t2 values ('d',2);
</code> We have to produce output like below. <code>
id  x
--- -------
a   1,2,3,4
b   1,2,3
c   1,2,4
d   1,2
</code> This can be achieved by the query below. <code>
select id, LISTAGG(val, ',') WITHIN GROUP (ORDER BY val) as x
from t2
group by id;
</code> Here the x column is of character datatype, but I need to convert it to a VARRAY of NUMBER / nested table of NUMBER (not a VARRAY/nested table of character). I tried the following. <code>
CREATE TYPE varchar_TT AS TABLE OF varchar(10);

with z as (
  select id,
         varchar_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x,
         varchar_TT('1,2') y
  from t2
  group by id
)
select id, x, y from z;

o/p
----
id  x                             y
--- ----------------------------- -------------------------
a   C##SIVA.<b>VARCHAR_TT('1,2,3,4')</b> C##SIVA.VARCHAR_TT('1,2')
b   C##SIVA.<b>VARCHAR_TT('1,2,3')</b>   C##SIVA.VARCHAR_TT('1,2')
c   C##SIVA.<b>VARCHAR_TT('1,2,4')</b>   C##SIVA.VARCHAR_TT('1,2')
d   C##SIVA.<b>VARCHAR_TT('1,2')</b>     C##SIVA.VARCHAR_TT('1,2')
</code> If I add the condition below, I do not get any result. <b>where y member of x;</b> So I tried to convert to a number array. <code>
CREATE TYPE number_TT AS TABLE OF number;

with z as (
  select id,
         number_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x,
         number_TT(1,2) y
  from t2
  group by id
)
select id, x, y from z;

ORA-01722: invalid number
01722. 00000 -  "invalid number"
*Cause:    The specified number was invalid.
*Action:   Specify a valid number.
</code> 1) Here I need output like below, so that I can use the <b><u>member</u></b> and <b><u>submultiset</u></b> conditions.
<code>
o/p
----
id  x                          y
--- -------------------------- ----------------------
a   C##SIVA.<b>NUMBER_TT(1,2,3,4)</b> C##SIVA.NUMBER_TT(1,2)
b   C##SIVA.<b>NUMBER_TT(1,2,3)</b>   C##SIVA.NUMBER_TT(1,2)
c   C##SIVA.<b>NUMBER_TT(1,2,4)</b>   C##SIVA.NUMBER_TT(1,2)
d   C##SIVA.<b>NUMBER_TT(1,2)</b>     C##SIVA.NUMBER_TT(1,2)

select varchar_tt('1,2') x, number_TT(1,2) y from dual;

x                         y
------------------------- ----------------------
C##SIVA.VARCHAR_TT('1,2') C##SIVA.NUMBER_TT(1,2)
</code> Please let me know how to convert a character array to a number array. 2) <code>
create table t4 (id VARCHAR2(1), val number_tt)
  NESTED TABLE val STORE AS val_2;
</code> How do I insert into t4 from t2? The expected output query of the t4 table should be like...
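One sketch that sidesteps the string parsing entirely (assuming the number_tt type above; COLLECT is Oracle's documented aggregate for building a nested table from rows, and a collection-vs-collection test needs SUBMULTISET rather than MEMBER): <code>
-- Build the NUMBER collection directly from the rows:
with z as (
  select id,
         cast(collect(val order by val) as number_tt) as x,
         number_tt(1, 2) as y
  from t2
  group by id
)
select id, x, y
from z
where y submultiset of x;

-- And for question 2, the same expression can feed the insert:
insert into t4 (id, val)
select id, cast(collect(val order by val) as number_tt)
from t2
group by id;
</code>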
Categories: DBA Blogs

why does a view's trigger disappear?

Tue, 2024-04-30 13:46
Got information from your archives, BUT the solution is not provided, or is there no solution??? Archive: "Why the trigger disappears... May 28, 2003 Reviewer: Kamal Kishore from New Jersey, USA" Hi Tom, After you re-create the view definition using CREATE OR REPLACE (maybe to change its condition), the trigger on the view disappears. Is this expected behaviour? <code>
SQL> create or replace view emp_view
  2  as
  3  select * from emp
  4  /

View created.

SQL> create or replace trigger trig_emp_view
  2  instead of insert or update on emp_view
  3  for each row
  4  begin
  5     Null ;
  6  end ;
  7  /

Trigger created.

SQL> show errors
No errors.

SQL> select ut.trigger_name, ut.table_owner, ut.table_name
  2  from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
  3  /

TRIGGER_NAME                   TABLE_OWNER                    TABLE_NAME
------------------------------ ------------------------------ ------------------------------
TRIG_EMP_VIEW                  KKISHORE                       EMP_VIEW

1 row selected.

SQL> create or replace view emp_view
  2  as
  3  select * from emp
  4  /

View created.

SQL> select ut.trigger_name, ut.table_owner, ut.table_name
  2  from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
  3  /

no rows selected
</code> Followup: the "or replace" is replacing the view and all related things. The create or replace preserves grants -- not the triggers. It is a "new view". ====>> So what should I do if I have views whose INSTEAD OF triggers became invalid? What syntax can I use to alter the view without my trigger disappearing? If "create or replace" cannot be used, what syntax can I use?
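Since CREATE OR REPLACE VIEW drops INSTEAD OF triggers, the usual defence (a sketch of the pattern, not an Oracle-provided alternative syntax) is to keep the view and its trigger in one deployment script so both are always recreated together: <code>
-- deploy_emp_view.sql: always run as a unit
create or replace view emp_view as
  select * from emp;

create or replace trigger trig_emp_view
  instead of insert or update on emp_view
  for each row
begin
  null;
end;
/
</code>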
Categories: DBA Blogs

CPU Utilization

Tue, 2024-04-30 13:46
Hi Tom, We run a multi-user OLTP system on an Exadata Quarter Rack (Linux version). Although system response time is consistent and the system is performing well, we observed run queues with CPU utilization at 70% on both nodes. What could be the reason? My understanding has always been that run queues form only if system utilization exceeds 100%. But in this case the CPU on both nodes is 65% utilized and 30% is free. Maybe my understanding is flawed. Could you please explain the concept of CPU utilization and run queues vis-a-vis cpu_count, especially for an OLTP workload?
Categories: DBA Blogs

Read consistency across cursors in one procedure

Tue, 2024-04-30 13:46
I am looking for read consistency across multiple cursors in a packaged procedure. In the past I have opened the cursors that I wanted to be consistent at the start of the procedure, used them, and closed them at the end. I am starting to think that the time the first cursor takes to open and resolve its result set is making the subsequent cursor inconsistent, although this seems to have worked 99% of the time. Example: <code>
DECLARE
  CURSOR Cur1 IS SELECT SUM(X) FROM A WHERE SummaryFlag = 'N';
  CURSOR Cur2 IS SELECT ROWID FROM A WHERE SummaryFlag = 'N';
BEGIN
  OPEN Cur1;
  OPEN Cur2;
  ...
  FOR Rows IN Cur1 UPDATE ASummary ...
  ...
  FOR Rows IN Cur2 UPDATE A SET SummaryFlag = 'Y' WHERE RowId = Cur2.ROWID;
</code> I have had a few occasions where the summary table does not contain the information that has now been flagged as summarized. Does opening the cursors one right after the other guarantee a consistent result set, and if not, why? Will using "ALTER TRANSACTION ISOLATION LEVEL SERIALIZABLE" fix this? How can I set my ISOLATION LEVEL and ROLLBACK segment at the same time? Thanks in advance.
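By default each cursor gets statement-level consistency as of its own OPEN, not a shared snapshot. For one point in time across both cursors, a serializable (or read-only) transaction gives transaction-level consistency; note the statement is SET TRANSACTION, not ALTER TRANSACTION, and it must be the first statement of the transaction. A sketch: <code>
BEGIN
  -- Must be the first statement of the transaction:
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  -- OPEN Cur1; OPEN Cur2;  -- both now see the same snapshot
  -- ... process and update ...
  COMMIT;  -- ends the serializable transaction
END;
</code>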
Categories: DBA Blogs

Receiving Webhook Events from Stripe Payment Processing

Tue, 2024-04-30 13:46
Here is my challenge. I am developing an application that receives webhook notifications when events occur. I have successfully used the RESTful services functionality in APEX (SQL Workshop > RESTful Services) to retrieve data at the first (root?) level. From the "request" sent by Stripe below, I can use parameters to retrieve the id, object, api_version, created, etc., but fail to retrieve data.object.id or anything nested at a lower level (apologies if I am using the wrong descriptors here). I have tried two approaches unsuccessfully: 1) a number of ways to identify the field as a parameter in the handler, and 2) retrieving the full JSON payload using :body, :body_text, :payload, :json_payload, etc. Any guidance on how I could identify specific fields lower in the hierarchy (example: data.object.id with value "cus_PfPbVdZHzvJq0E" below) as a parameter? Or any guidance on how I could grab the full JSON payload? Any guidance is appreciated. Dwain <code>
{
  "id": "evt_1Oq4mjJ861pVT3w2L6jYiwce",
  "object": "event",
  "api_version": "2018-02-28",
  "created": 1709432897,
  "data": {
    "object": {
      "id": "cus_PfPbVdZHzvJq0E",
      "object": "customer",
      "account_balance": 0,
      "address": null,
      "balance": 0,
      "created": 1709432896,
      "currency": null,
      "default_currency": null,
      "default_source": null,
      "delinquent": false,
      "description": null,
      "discount": null,
      "email": "mike@dc.com",
      "invoice_prefix": "2420987A",
      "invoice_settings": {
        "custom_fields": null,
        "default_payment_method": null,
        "footer": null,
        "rendering_options": null
      },
      "livemode": false,
      "metadata": {},
      "name": "mike",
      "next_invoice_sequence": 1,
      "phone": null,
      "preferred_locales": [],
      "shipping": null,
      "sources": {
        "object": "list",
        "data": [],
        "has_more": false,
        "total_count": 0,
        "url": "/v1/customers/cus_PfPbVdZHzvJq0E/sources"
      },
      "subscriptions": {
        "object": "list",
        "data": [],
        "has_more": false,
        "total_count": 0,
        "url": "/v1/customers/cus_PfPbVdZHzvJq0E/subscriptions"
      },
      "tax_exempt": "none",
      "tax_ids": {
        "object": "list",
        "data": [],
        "has_more": false,
        "total_count": 0,
        "url": "/v1/customers/cus_PfPbVdZHzvJq0E/tax_ids"
      },
      "tax_info": null,
      "tax_info_verification": null,
      "test_clock": null
    }
  },
  "livemode": false,
  "pending_webhooks": 1,
  "request": {
    "id": "req_KtKtxAnXwioenZ",
    "idempotency_key": "7263ed4a-0295-4a4e-a0b8-d7d3bf7f03b3"
  },
  "type": "customer.created"
}
</code>
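One pattern that may help (a sketch under assumptions: the stripe_events table and column names are made up, and the exact bind available depends on the ORDS version): in an ORDS POST handler the raw payload can be read once via :body (BLOB) or :body_text (CLOB), and JSON_VALUE then extracts any nested field by path. <code>
-- Inside the POST handler source (PL/SQL):
DECLARE
  l_body        CLOB := :body_text;          -- full JSON payload
  l_customer_id VARCHAR2(100);
BEGIN
  l_customer_id := JSON_VALUE(l_body, '$.data.object.id');
  -- e.g. persist the event for later processing (hypothetical table):
  INSERT INTO stripe_events (customer_id, payload)
  VALUES (l_customer_id, l_body);
END;
</code> One caveat worth knowing: the body bind can generally be consumed only once per request, so read it into a local variable first and extract everything from that.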
Categories: DBA Blogs

How to call a REST API that accepts x-www-form-urlencoded from a PL/SQL procedure in APEX

Mon, 2024-04-22 01:26
How do I call a REST API that accepts x-www-form-urlencoded from a PL/SQL procedure in APEX? I am calling https://api.textlocal.in/docs/sendsms
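A form-encoded POST is just name=value pairs in the body plus the matching Content-Type header; a sketch with APEX_WEB_SERVICE (the endpoint path and parameter names below are assumptions taken from textlocal's docs, so verify them there): <code>
DECLARE
  l_response CLOB;
BEGIN
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://api.textlocal.in/send/',
    p_http_method => 'POST',
    p_body        => 'apikey='   || utl_url.escape('your_api_key', true)
                  || '&numbers=' || utl_url.escape('911234567890', true)
                  || '&message=' || utl_url.escape('hello world', true));

  dbms_output.put_line(l_response);
END;
</code> Each value is URL-encoded with UTL_URL.ESCAPE so reserved characters in the message survive the form encoding.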
Categories: DBA Blogs

Error in pl/sql code

Mon, 2024-04-22 01:26
When I try to run this code: <code>
DECLARE
  STUDENT_ID NUMBER;
BEGIN
  -- Generate the next value for the sequence
  SELECT LMS_STUDENT_DETAILS_SEQ.nextval;

  -- Insert data into LMS_STUDENT_DETAILS table
  INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
  VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);

  -- Insert data into LMS_BORROWER table
  INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
  VALUES (LMS_BORROWER_SEQ.nextval, STUDENT_ID, 'STUDENT');
END;
</code> I faced this error: <code>
ORA-06550: line 1, column 106:
PLS-00103: Encountered the symbol "DECLARE" when expecting one of the following:
( - + case mod new not null <an identifier> <a double-quoted delimited-identifier>
<a bind variable> continue avg count current exists max min prior sql stddev sum
variance execute forall merge time timestamp interval date
<a string literal with character set specification> <a number>
<a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set
</code>
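One structural problem stands out: in PL/SQL a SELECT needs an INTO clause, and here the sequence value is never stored in STUDENT_ID at all. A sketch of a fix (object and item names are taken from the question as-is): <code>
DECLARE
  STUDENT_ID NUMBER;
BEGIN
  -- Direct assignment works in 11g and later; older versions need
  -- SELECT LMS_STUDENT_DETAILS_SEQ.NEXTVAL INTO STUDENT_ID FROM dual;
  STUDENT_ID := LMS_STUDENT_DETAILS_SEQ.NEXTVAL;

  INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
  VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);

  INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
  VALUES (LMS_BORROWER_SEQ.NEXTVAL, STUDENT_ID, 'STUDENT');
END;
</code> Renaming the local variable (e.g. l_student_id) would also avoid it shadowing the STUDENT_ID column name inside the INSERT.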
Categories: DBA Blogs

Mixed version dataguard

Mon, 2024-04-22 01:26
According to MOS (Metalink) note 785347.1 it seems possible to have a Data Guard configuration with an 11.2 primary and a 12.2 (or even later) standby, but the note is really very condensed. Could you please confirm that 11.2 -> 12.2 is really possible? If so, what about 11.2 -> 19.x? Or 12.2 -> 19.x? Of course the idea is to upgrade to a later version with very short downtime; after switching over to the newer version, the old one would be discarded and the Data Guard configuration no longer used. Best regards, Mauro
Categories: DBA Blogs

Is it a must to run pupbld.sql as system

Mon, 2024-04-22 01:26
Tom, I create databases, then I run catalog.sql and catproc.sql. Sometimes I do not run pupbld.sql. Users may get a warning message, but they can log in and work. But my friend says that if pupbld.sql is not run as SYSTEM, users will get error messages and cannot log into the database at all. Is that true? Is it a must to run pupbld.sql? I could not find in the documentation whether it is a must. If it is a must, how am I able to log in? Is it called by any other script such as catalog.sql or catproc.sql? I grepped both files for pupbld.sql and it does not appear. Please clarify. Regards, Ravi
Categories: DBA Blogs

How to update a user defined database package in production

Fri, 2024-04-19 18:26
I have a user defined database package which is used quite heavily. When I need to update the code body, I will get several errors like: <code>
ORA-04061: existing state of package body "CS.PACKAGE" has been invalidated
ORA-04065: not executed, altered or dropped package body "CS.PACKAGE"
ORA-06508: PL/SQL: could not find program unit being called: "CS.PACKAGE"
ORA-06512: at "CS.PROCEDURE", line 228
</code> We are using a connection pool. How do I put the changes into PACKAGE without getting several of the above errors? I cannot control the use of the package, and it is very heavily used.
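One thing worth checking first (a sketch of the pattern, not a diagnosis of your package): ORA-04061 is raised for sessions that hold package *state*; a stateless package is re-instantiated silently after recompilation, so pooled sessions just pick up the new body on their next call. <code>
-- Package-level variables in the spec or body create state that pins
-- existing sessions to the old version:
CREATE OR REPLACE PACKAGE BODY cs.pkg AS   -- "cs.pkg" stands in for CS.PACKAGE
  -- g_last_id NUMBER;   -- state like this triggers ORA-04061 on replace
  PROCEDURE run IS
  BEGIN
    NULL;  -- keep state in a table or SYS_CONTEXT instead of globals
  END run;
END pkg;
/
</code> If the state cannot be removed, the remaining options are recycling the connection pool around the deployment or using edition-based redefinition.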
Categories: DBA Blogs

Explain plan estimate vs actual

Tue, 2024-04-16 17:06
Hi, I used explain plan and got the following results. Based on cost and time, does query 2 perform significantly better than query 1? The runtime for query 1 is approximately 1 minute and 40 seconds, but the plan shows 07:47:02. Why is the estimated time so different from the actual? Your help is much appreciated! Query 1: <code>
------------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 71730 |  2241K|   717M  (1)| 07:47:02 |
|*  1 |  TABLE ACCESS FULL| TBL1 | 71730 |  2241K|   717M  (1)| 07:47:02 |
------------------------------------------------------------------------------
</code> Query 2: <code>
------------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 71730 |  2241K| 51028   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TBL1 | 71730 |  2241K| 51028   (1)| 00:00:02 |
------------------------------------------------------------------------------
</code>
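The Time column is just the optimizer's cost converted into an estimated duration, so it can be far from reality. To compare estimates against actuals, rowsource statistics are the usual tool; a sketch (the COUNT(*) query stands in for the real statement against TBL1): <code>
-- Run the statement with rowsource statistics collected:
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM tbl1;

-- Then display the last executed plan with actual figures:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
-- Compare E-Rows (estimate) with A-Rows (actual) and A-Time per step.
</code>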
Categories: DBA Blogs

to_char a big number insert into database become scientific notation

Tue, 2024-04-16 17:06
Hi, Tom. Please see the script below. <code>
create table t0326 (id number, num varchar2(100));

declare
  v_empno number := 125854437665589038536841445202964995521300;
begin
  dbms_output.put_line('v_empno -- ' || v_empno);
  dbms_output.put_line('to_char(v_empno) -- ' || to_char(v_empno));
  insert into t0326 values (10, to_char(v_empno));
  commit;
end;
/

v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300

select * from t0326;

        ID NUM
---------- ------------------------------------------------------------
        10 1.2585443766558903853684144520296500E+41

declare
  v_empno   number := 125854437665589038536841445202964995521300;
  v_s_empno varchar2(100);
begin
  v_s_empno := to_char(v_empno);
  dbms_output.put_line('v_empno -- ' || v_empno);
  dbms_output.put_line('to_char(v_empno) -- ' || to_char(v_empno));
  dbms_output.put_line('v_s_empno -- ' || v_s_empno);
  insert into t0326 values (20, to_char(v_empno));
  insert into t0326 values (30, v_s_empno);
  insert into t0326 values (40, to_char(v_empno, 'FM999999999999999999999999999999999999999999999999999999999'));
  commit;
end;
/

v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
v_s_empno -- 125854437665589038536841445202964995521300

select * from t0326;

        ID NUM
---------- -----------------------------------------------------------------------
        10 1.2585443766558903853684144520296500E+41
        20 1.2585443766558903853684144520296500E+41
        30 125854437665589038536841445202964995521300
        40 125854437665589038536841445202964995521300
</code> It displays normally when "to_char(v_empno)" is passed to dbms_output.put_line, but inserting it into the database converts it to scientific notation. I tried two solutions to work around this, shown above: 1. use a variable to store to_char(v_empno), then insert that variable; 2. use to_char(xx, fmt) to control the format. I wonder why "to_char(v_empno)" in dbms_output.put_line is not scientific notation, and why adding a temp variable solves the problem?
Categories: DBA Blogs

Catastrophic Database Failure -- Deletion of Control and Redo Files

Wed, 2024-04-10 14:26
We recently had a database failure that resulted in data loss after an Oracle 19.3.0.0.0 database had both its control files and redo log files deleted. Please note that I am not a DBA, but simply an analyst who supports the system that sits on this Oracle database. Any amount of data loss is fairly serious, and I am wondering how we avoid this in the future. Before the control and redo files were deleted, we had an event wherein the drive this database is on filled up. This caused the database to stop writing transactions and kept users from accessing the application. Once space was made on the drive, the database operated normally for several hours until... the redo and control files were deleted. What would have caused the control and redo files to be deleted? In trying to figure out what happened, it was noted that if we had expanded the drive's capacity in response to its becoming full, the later data loss would not have happened. Does Tom agree with that sentiment? Are these two events linked (disk drive nearly full, then later data loss), or are they symptomatic of two different things?
Categories: DBA Blogs

Is 'SELECT * FROM :TABLE_NAME;' available?

Tue, 2024-04-09 20:06
Is 'SELECT * FROM :TABLE_NAME;' available?
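Not directly: bind variables supply values, not identifiers, so a table name cannot be bound. The usual workaround (a sketch; the table name EMP is hypothetical) is dynamic SQL, with the identifier validated before it is concatenated into the statement: <code>
DECLARE
  l_table VARCHAR2(128) := 'EMP';   -- would arrive as a parameter
  l_cnt   NUMBER;
BEGIN
  -- DBMS_ASSERT.SIMPLE_SQL_NAME rejects names that could inject SQL:
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(l_table)
    INTO l_cnt;
  DBMS_OUTPUT.PUT_LINE(l_cnt);
END;
</code>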
Categories: DBA Blogs

Is fragmentation an issue?

Mon, 2024-04-08 07:06
Hi all, I have 1000 tables. Some of the tables have had rows deleted and updated, so fragmentation has built up. How do I determine which tables are fragmented?
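One rough heuristic (a sketch under assumptions: it relies on fresh optimizer statistics and assumes an 8 KB block size, and "wasted" space may simply be reused by future inserts) is to compare each table's allocated blocks with the space its rows actually need: <code>
SELECT table_name,
       blocks                              AS allocated_blocks,
       CEIL(num_rows * avg_row_len / 8192) AS estimated_needed_blocks
FROM   user_tables
WHERE  num_rows > 0
AND    blocks > 2 * CEIL(num_rows * avg_row_len / 8192)  -- >50% apparent waste
ORDER  BY blocks DESC;
</code> Candidates flagged this way can then be examined more precisely with DBMS_SPACE before deciding whether a shrink or move is worthwhile.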
Categories: DBA Blogs

19c doesn't allow truncation of data that is longer than a column's CHAR(40) definition

Mon, 2024-04-08 07:06
We have an application that has been written to insert a variable that is CHAR(50) into a column that is defined as CHAR(40). In Oracle 11g (I know this is very old) it would merely truncate the last 10 characters without issue. However, Oracle 19c doesn't allow this and raises an exception (which I believe should have always been the case). Where can I find documentation of this restriction and when it was changed, and is there a way around it other than changing the program code? Oracle 11 truncated the extra 10 characters in the statement below: <code>
ADBBGNX_ADDRESS_LINE_1 := agentrecord.producerrec.businessAddressLine1;
</code> Oracle 19 throws an exception with a NULL error status.
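If changing the assignment itself is acceptable, truncating explicitly makes the behaviour version-independent; a sketch (identifiers taken from the question as-is): <code>
-- SUBSTR guarantees the value fits the CHAR(40) target,
-- regardless of how the source variable is declared:
ADBBGNX_ADDRESS_LINE_1 := SUBSTR(agentrecord.producerrec.businessAddressLine1, 1, 40);
</code>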
Categories: DBA Blogs
