DBA Blogs
I was trying to tune a MySQL query this week. I ran the same query against Oracle with the same data and got a much faster runtime on Oracle. I couldn't get MySQL to do a range scan on the column that Oracle was using. So I started scratching the surface with a simple test of when MySQL will use an index versus a full table scan in a range query. In my test MySQL always uses the index except under extreme out-of-range conditions. This is funny because in my real problem query it was the opposite. But I might as well document what I found for what it's worth. I haven't blogged much lately.
Here is my testcase and its output:
https://www.bobbydurrettdba.com/uploads/mysqlindexuserangequeries.zip
This is on 8.0.26 as part of an AWS Aurora MySQL RDS instance with 2 cores and 16 gigabytes of RAM.
I created a simple test table and put 10485760 rows in it:
create table test
(a integer NOT NULL AUTO_INCREMENT,
b integer,
PRIMARY KEY (a));
The value of b is always 1 and a ranges from 1 to 10878873.
This query uses a range query using the index:
select
sum(b)
from
test
where
a > -2147483648;
This query uses a full table scan:
select
sum(b)
from
test
where
a > -2147483649;
The full scan is slightly faster.
Somehow when you are 2147483650 units away from the smallest value of a the MySQL optimizer suddenly thinks you need a full scan. The boundary itself is suggestive: -2147483648 is the smallest value a signed 32-bit INT can hold, so -2147483649 is the first constant that falls outside the column's type range, which apparently flips the optimizer to a full scan.
There are a million more tests I could do like things with a million variables, but I thought I might as well put this out there. I’m not really any the wiser but it is a type of test that might be worth mentioning.
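A quick sanity check of that boundary (just arithmetic, sketched in Python): the last constant that still used the index is exactly the signed 32-bit INT minimum, and the first one that triggered the full scan lies just below it, outside the INT range.

```python
# The last predicate that still used the index was  a > -2147483648,
# and the first that triggered a full scan was     a > -2147483649.
int32_min = -(2 ** 31)

print(int32_min)                 # -2147483648: the MySQL INT minimum
print(-2147483649 < int32_min)   # True: that constant is outside INT's range
```

So the switch is not really about distance from the smallest value of a; it lines up with the constant leaving the representable range of the INT column.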
Bobby
I've just uploaded a new video on monitoring Data Guard where the standby is a Single Instance environment for a 2-node RAC cluster.
The scripts used in the demo are in this ZIP (script files have the extension .TXT)
Hi Tom,
We run a multi-user OLTP system on an Exadata Quarter Rack (Linux version). Though the system response time is consistent and the system is performing well, we observed run queues with CPU utilization at 70% on both nodes. What could be the reason?
My understanding has always been that run queues form only if system utilization exceeds 100%. But in this case the CPU on both nodes is 65% utilized and 30% is free.
But maybe my understanding is flawed.
Could you please explain the concept of CPU utilization and run queues vis-à-vis cpu_count, especially for an OLTP workload?
I am looking for read consistency across multiple cursors in a packaged procedure. In the past I have opened the cursors that I wanted to be consistent at the start of the procedure, used them, and closed them at the end. I am starting to think that the time the first cursor takes to open and resolve its result set is making the subsequent cursor inconsistent, although this seems to have worked 99% of the time.
Example:
<code>DECLARE
  CURSOR Cur1 IS
    SELECT SUM(X) FROM A WHERE SummaryFlag = 'N';
  CURSOR Cur2 IS
    SELECT ROWID FROM A WHERE SummaryFlag = 'N';
BEGIN
  OPEN Cur1;
  OPEN Cur2;
  ...
  FOR Rows IN Cur1
    UPDATE ASummary
    ...
  FOR Rows IN Cur2
    UPDATE A SET SummaryFlag = 'Y' WHERE RowId = Cur2.ROWID;</code>
I have had a few occasions where the summary table does not contain the information that has now been flagged as summarized.
Does opening the cursors one right after the other guarantee a consistent result set, and if not, why? Will using "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" fix this? How can I set my isolation level and rollback segment at the same time?
Thanks in advance.
I got information from your archives, but the solution is not provided (or is there no solution?).
Archive : "Why the trigger disappears... May 28, 2003
Reviewer: Kamal Kishore from New Jersey, USA "
Hi Tom,
After you re-create the view definition using CREATE OR REPLACE (maybe to change its condition), the trigger on the view disappears. Is this expected behaviour?
SQL> create or replace view emp_view
2 as
3 select * from emp
4
SQL> /
View created.
SQL> create or replace trigger trig_emp_view
2 instead of insert or update on emp_view
3 for each row
4 begin
5 Null ;
6 end ;
7 /
Trigger created.
SQL> show errors
No errors.
SQL> select ut.trigger_name, ut.table_owner, ut.table_name
2 from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
3 /
TRIGGER_NAME TABLE_OWNER TABLE_NAME
------------------------------ ------------------------------
------------------------------
TRIG_EMP_VIEW KKISHORE EMP_VIEW
1 row selected.
SQL> create or replace view emp_view
2 as
3 select * from emp
4 /
View created.
SQL> select ut.trigger_name, ut.table_owner, ut.table_name
2 from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
3 /
no rows selected
Followup:
the "or replace" is replacing the view and all related things. the create or
replace preserves grants -- not the triggers. it is a "new view"
====>> So what should I do if my views with INSTEAD OF triggers become invalid? What syntax can I use to alter the view without my trigger disappearing?
If "create or replace" cannot be used, what syntax can I use?
It took me 25 years to get used to Oracle using 1-based indexing in pretty much all APIs.
Now the rather new json_array_t data structure uses 0-based indexing, and it drives me crazy.
Is there any reason behind this "strange anomaly", or did someone just want to drive people crazy?
The following example only returns 2 and 3, because the loop must be written "FOR i IN 0 .. c_json.get_size() - 1 LOOP":
<code>
DECLARE
c_json CONSTANT json_array_t := json_array_t('[1, 2, 3]');
BEGIN
FOR i IN 1 .. c_json.get_size() LOOP
dbms_output.put_line(c_json.get_number(i));
END LOOP;
END;
/
</code>
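The off-by-one is the same one every 0-based language exhibits; a small Python sketch (Python lists are 0-based too, and here a helper mimics get_number returning nothing for an out-of-range position) reproduces the "only returns 2 and 3" symptom:

```python
data = [1, 2, 3]  # like json_array_t('[1, 2, 3]'), valid positions 0..2

def get(i):
    # mimic an out-of-range access yielding no value instead of an error
    return data[i] if 0 <= i < len(data) else None

one_based = [get(i) for i in range(1, len(data) + 1)]  # the 1-based habit
zero_based = [get(i) for i in range(0, len(data))]     # what the API wants

print(one_based)    # [2, 3, None]  -> "only returns 2 and 3"
print(zero_based)   # [1, 2, 3]
```

A plausible (unconfirmed) rationale: JSON comes from the JavaScript world, where arrays are 0-based, so json_array_t follows the JSON convention rather than the PL/SQL one.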
Here is my challenge. I am developing an application that receives webhook notifications when events occur. I have successfully used the RESTful services functionality in APEX (SQL Workshop > RESTful Services) to retrieve data at the first (root?) level. From the "request" sent from Stripe below I can use parameters to retrieve the id, object, api_version, created, etc., but fail to retrieve data.object.id or anything nested at a lower level (apologies if I am using the wrong descriptors here).
I have tried two approaches unsuccessfully:
1) a number of ways to identify the field as a parameter in the handler without success
2) retrieve the full json payload using :body, :body_text, :payload, :json_payload, etc.
Any guidance on how I could identify specific fields lower in the hierarchy (example: the data.object.id with value "cus_PfPbVdZHzvJq0E" below) as a parameter? Or, any guidance on how I could grab the full json payload?
Any guidance is appreciated.
Dwain
{
"id": "evt_1Oq4mjJ861pVT3w2L6jYiwce",
"object": "event",
"api_version": "2018-02-28",
"created": 1709432897,
"data": {
"object": {
"id": "cus_PfPbVdZHzvJq0E",
"object": "customer",
"account_balance": 0,
"address": null,
"balance": 0,
"created": 1709432896,
"currency": null,
"default_currency": null,
"default_source": null,
"delinquent": false,
"description": null,
"discount": null,
"email": "mike@dc.com",
"invoice_prefix": "2420987A",
"invoice_settings": {
"custom_fields": null,
"default_payment_method": null,
"footer": null,
"rendering_options": null
},
"livemode": false,
"metadata": {
},
"name": "mike",
"next_invoice_sequence": 1,
"phone": null,
"preferred_locales": [
],
"shipping": null,
"sources": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_PfPbVdZHzvJq0E/sources"
},
"subscriptions": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_PfPbVdZHzvJq0E/subscriptions"
},
"tax_exempt": "none",
"tax_ids": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_PfPbVdZHzvJq0E/tax_ids"
},
"tax_info": null,
"tax_info_verification": null,
"test_clock": null
}
},
"livemode": false,
"pending_webhooks": 1,
"request": {
"id": "req_KtKtxAnXwioenZ",
"idempotency_key": "7263ed4a-0295-4a4e-a0b8-d7d3bf7f03b3"
},
"type": "customer.created"
}
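For illustration, once you have the whole payload as text, pulling a nested field out is just walking the object level by level; this Python sketch shows the idea on a trimmed copy of the event above. (In an ORDS handler the analogous move, as an assumption to verify in your environment, would be reading the raw :body/:body_text bind and applying a JSON path such as '$.data.object.id' with JSON_VALUE.)

```python
import json

# A trimmed version of the Stripe event shown above
payload = """{
  "id": "evt_1Oq4mjJ861pVT3w2L6jYiwce",
  "object": "event",
  "data": {
    "object": {
      "id": "cus_PfPbVdZHzvJq0E",
      "object": "customer"
    }
  }
}"""

event = json.loads(payload)
customer_id = event["data"]["object"]["id"]   # walk the nesting level by level
print(customer_id)                            # cus_PfPbVdZHzvJq0E
```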
I have a table like below.
<code>create table t2 ( id varchar2(1),val number) ;
insert into t2 values ('a',1);
insert into t2 values ('a',2);
insert into t2 values ('a',3);
insert into t2 values ('a',4);
insert into t2 values ('b',1);
insert into t2 values ('b',2);
insert into t2 values ('b',3);
insert into t2 values ('c',1);
insert into t2 values ('c',2);
insert into t2 values ('c',4);
insert into t2 values ('d',1);
insert into t2 values ('d',2);</code>
We have to print output like below.
<code>id x
--- -------
a 1,2,3,4
b 1,2,3
c 1,2,4
d 1,2</code>
This can be achieved with the query below:
<code>select id,LISTAGG(val, ',') WITHIN GROUP (ORDER BY val ) as x
from t2
group by id</code>
Here the x column is a character datatype, but I need to convert it to a varray of numbers / nested table of numbers (not a varray/nested table of characters).
I tried the following:
<code>CREATE TYPE varchar_TT AS TABLE OF varchar(10);
with z as (
select id,varchar_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x ,varchar_TT('1,2') y
from t2
group by id )
select id , x ,y from z ;
o/p
----
id x y
---- ------------- ---------------
a C##SIVA.<b>VARCHAR_TT('1,2,3,4')</b>C##SIVA.VARCHAR_TT('1,2')
b C##SIVA.<b>VARCHAR_TT('1,2,3') </b>C##SIVA.VARCHAR_TT('1,2')
c C##SIVA.<b>VARCHAR_TT('1,2,4') </b>C##SIVA.VARCHAR_TT('1,2')
d C##SIVA.<b>VARCHAR_TT('1,2') </b>C##SIVA.VARCHAR_TT('1,2')</code>
If I add the condition below, I am not getting any results:
<b>where y member of x ;</b>
So I tried to convert to a number array:
<code>CREATE TYPE number_TT AS TABLE OF number;
with z as (
select id,number_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x ,number_TT(1,2) y
from t2
group by id )
select id , x ,y from z ;
ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.</code>
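The ORA-01722 makes sense once you see what the constructor receives: LISTAGG produces one string like '1,2,3,4', and number_TT('1,2,3,4') tries to convert that whole string as a single number. The conversion has to split first, as this Python sketch of the intended logic shows (in Oracle SQL, the usual move is to skip LISTAGG entirely and aggregate with CAST(COLLECT(val) AS number_TT); treat that as a direction to verify rather than a tested answer):

```python
def to_number_list(csv):
    """Split a '1,2,3,4'-style string and convert each piece to a number."""
    return [int(x) for x in csv.split(',')]

print(to_number_list('1,2,3,4'))   # [1, 2, 3, 4]
print(to_number_list('1,2'))       # [1, 2]
```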
1) Here I need output like below, in order to use the <b><u>member</u></b> and <b><u>submultiset</u></b> conditions.
<code>o/p
----
id x y
---- ------------- ---------------
a C##SIVA.<b>NUMBER_TT(1,2,3,4) </b>C##SIVA.NUMBER_TT(1,2)
b C##SIVA.<b>NUMBER_TT(1,2,3) </b>C##SIVA.NUMBER_TT(1,2)
c C##SIVA.<b>NUMBER_TT(1,2,4) </b>C##SIVA.NUMBER_TT(1,2)
d C##SIVA.<b>NUMBER_TT(1,2) </b>C##SIVA.NUMBER_TT(1,2)
select varchar_tt('1,2') x ,number_TT(1,2) y from dual;
x y
-------------------- ----------------
C##SIVA.VARCHAR_TT('1,2') C##SIVA.NUMBER_TT(1,2)</code>
Please let me know how to convert a character array to a number array.
2)
<code>create table t4 ( id VARCHAR2(1) , val number_tt )
NESTED TABLE val STORE AS val_2 ;</code>
How do I insert into table t4 from t2?
The expected output when querying t4 should be like...
Tom,
I create databases, then I run catalog.sql and catproc.sql. Sometimes I do not run pupbld.sql. Users may get a warning message, but they can log in and work.
But my friend says that if pupbld.sql is not run as SYSTEM, then users will get error messages and cannot log into the database at all. Is it true?
Is it a must to run pupbld.sql? I could not see in the documentation whether it is a must. If it is, how am I able to log in?
Is this being called by any other script such as catalog.sql or catproc.sql? I grepped both files for pupbld.sql; it does not appear.
Please clarify.
Regards
Ravi
According to Metalink note 785347.1, it seems possible to have a Data Guard configuration with an 11.2 primary and a 12.2 (or even later) standby, but the note is really very condensed.
Could you please just confirm that 11.2 -> 12.2 is really possible?
If so, what about 11.2 -> 19.x?
Or 12.2 -> 19.x?
Of course the idea is to upgrade to a later version with very short downtime; after switching over to the newer version, the old one would be discarded and the Data Guard configuration no longer used.
Best regards
Mauro
How do I call a REST API that accepts x-www-form-urlencoded content from a PL/SQL procedure in APEX?
I am calling:
https://api.textlocal.in/docs/sendsms
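For illustration only (not APEX code): an x-www-form-urlencoded body is just URL-encoded key=value pairs joined with &, as this Python sketch shows. In PL/SQL the same idea applies: set the Content-Type header to application/x-www-form-urlencoded and send that encoded string as the body (apex_web_service.make_rest_request also accepts p_parm_name/p_parm_value pairs for this). The parameter names and endpoint below are assumptions taken from the textlocal docs page linked above; verify them there.

```python
from urllib import parse, request

# Hypothetical parameter names/values; check the textlocal docs for the real ones.
params = {
    "apikey": "YOUR_API_KEY",
    "numbers": "911234567890",
    "message": "Hello from a form-urlencoded POST",
}

body = parse.urlencode(params)        # "apikey=YOUR_API_KEY&numbers=..."
req = request.Request(
    "https://api.textlocal.in/send/",  # assumed endpoint; not called here
    data=body.encode("ascii"),         # the body must be bytes
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)
```

The request object is built but deliberately not sent; the point is what the encoded body and header look like.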
When I try to run this code:
DECLARE
STUDENT_ID NUMBER;
BEGIN
-- Generate the next value for the sequence
SELECT LMS_STUDENT_DETAILS_SEQ.nextval;
-- Insert data into LMS_STUDENT_DETAILS table
INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);
-- Insert data into LMS_BORROWER table
INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
VALUES (LMS_BORROWER_SEQ.nextval, STUDENT_ID, 'STUDENT');
END;
I faced this error:
ORA-06550: line 1, column 106: PLS-00103: Encountered the symbol "DECLARE" when expecting one of the following: ( - + case mod new not null <an identifier> <a double-quoted delimited-identifier> <a bind variable> continue avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string literal with character set specification> <a number> <a single-quoted SQL string> pipe <an alternatively-quoted string literal with character set
I have a user-defined database package that is used quite heavily. When I need to update the code body, I get several
<code>ORA-04061: existing state of package body "CS.PACKAGE" has been invalidated
ORA-04065: not executed, altered or dropped package body "CS.PACKAGE"
ORA-06508: PL/SQL: could not find program unit being called: "CS.PACKAGE"
ORA-06512: at "CS.PROCEDURE", line 228</code>
We are using a connection pool. How do I put the changes into PACKAGE without getting several of the above errors?
I cannot control the use of the package, and it is very heavily used.
This video is a hands-on, step-by-step primer on how to quantize any model using Hugging Face Quanto, a versatile PyTorch quantization toolkit.
!pip install transformers==4.35.0
!pip install quanto==0.0.11
!pip install torch==2.1.1
!pip install sentencepiece==0.2.0

model_name = "google/flan-t5-small"

import sentencepiece as spm
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

input_text = "Meaning of happiness is "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))

from helper import compute_module_sizes
module_sizes = compute_module_sizes(model)
print(f"The model size is {module_sizes[''] * 1e-9} GB")

from quanto import quantize, freeze
import torch

quantize(model, weights=torch.int8, activations=None)
freeze(model)

module_sizes = compute_module_sizes(model)
print(f"The model size is {module_sizes[''] * 1e-9} GB")

input_text = "Meaning of happiness is "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
This video is a hands-on, step-by-step primer on how to use RAG with OpenAI File Search. OpenAI now supports RAG, which means that you can attach your own files and custom data to an OpenAI assistant and talk to your documents with GPT-4. Make sure you have installed the latest version of openai on your local system (pip install openai --upgrade), and make sure data.txt is in the same folder as your script.

from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Personal Assistant",
    instructions="You are an empathetic assistant. Use your knowledge base to answer questions.",
    model="gpt-4-turbo",
    tools=[{"type": "file_search"}],
)

# Create a vector store called "Personal Data"
vector_store = client.beta.vector_stores.create(
    name="Personal Data",
    expires_after={"anchor": "last_active_at", "days": 1},
)

# Ready the files for upload to OpenAI
file_paths = ["data.txt"]
file_streams = [open(path, "rb") for path in file_paths]

# Use the upload-and-poll SDK helper to upload the files, add them to the
# vector store, and poll the status of the file batch for completion.
file_batch = client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id, files=file_streams
)

# You can print the status and the file counts of the batch to see the result.
print(file_batch.status)
print(file_batch.file_counts)

assistant = client.beta.assistants.update(
    assistant_id=assistant.id,
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

thread = client.beta.threads.create(
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
    messages=[{"role": "user", "content": "Who is Fahd Mirza?"}],
)

# The thread now has a vector store with that file in its tool resources.
print(thread.tool_resources.file_search)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

messages = list(client.beta.threads.messages.list(thread_id=thread.id, run_id=run.id))
message_content = messages[0].content[0].text
annotations = message_content.annotations
citations = []
for index, annotation in enumerate(annotations):
    message_content.value = message_content.value.replace(annotation.text, f"[{index}]")
    if file_citation := getattr(annotation, "file_citation", None):
        cited_file = client.files.retrieve(file_citation.file_id)
        citations.append(f"[{index}] {cited_file.filename}")

print(message_content.value)
print("\n".join(citations))
Function calling in AI simply means that you can call external APIs from within your AI-powered application. Whenever you read that a model can do function calling, it means the model can take a user's natural-language query and convert it into a function call. You can then execute that function call against your API endpoint to get the data, give it to the LLM as additional context, and get a more grounded, up-to-date response, as your application requires.
import openai

def Get_Beach_Patrol_Info(name, location):
    if 'Bondi' in name:
        print(f"The beach {name} at {location} is patrolled.")
    elif 'Marley' in name:
        print(f"The beach {name} of {location} is unpatrolled.")
    else:
        print("Beach not found.")

def get_gorilla_response(prompt="", model="gorilla-openfunctions-v2", functions=[]):
    openai.api_key = "EMPTY"  # Hosted for free by UC Berkeley
    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
        completion = openai.ChatCompletion.create(
            model="gorilla-openfunctions-v2",
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
            functions=functions,
        )
        return completion.choices[0]
    except Exception:
        print("error occurred.")

beach_custom_functions = [
    {
        'name': 'Get_Beach_Patrol_Info',
        'description': 'Get name and location of beach from the body of the input text',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the beach'
                },
                'location': {
                    'type': 'string',
                    'description': 'Location where beach is located.'
                }
            }
        }
    }
]

beach_1_description = """Bondi Beach is an iconic destination located in Sydney, Australia, renowned for its golden sands and lively atmosphere. It attracts surfers, sunbathers, and tourists alike, offering stunning ocean views and a vibrant coastal culture."""

beach_2_description = """Marley Beach (also known as Big Marley Beach) is a stunning beach in the upper Royal National Park, on the outskirts of southern Sydney, Australia. The beach is one of eleven beaches located within the territory of the Royal National Park."""

beach_description = [beach_1_description, beach_2_description]

for i in beach_description:
    response = get_gorilla_response(prompt=i, functions=beach_custom_functions)
    func = response.message.content
    print(func)
    eval(func)
I frequently use Oracle wallet for my databases to store certificates. For one site, I started receiving a '401 authorization required' error, and it seemed that the cert had expired. If you are after the TL;DR, the following shows how we fixed it.
SQL> select utl_http.request('https://dummysite.com',null,'file:/scripts/src/oracle/wallets','XXXXXXXXXX') from dual;

UTL_HTTP.REQUEST('https://dummysite.com',NULL,
--------------------------------------------------------------------------------
401 Authorization Required

SQL> select utl_http.request('https://dummysite.com',null,'file:/d01/oracle/prod2db/10.2.0/appsutil/wallet', null) from dual;

UTL_HTTP.REQUEST('https://dummysite.com',NULL,
--------------------------------------------------------------------------------
401 Authorization Required
All I did was add the root certificates to the wallet located at /scripts/src/oracle/wallets, which fixed the issue.
For details:
First, identify which HTTP call from within the application or database was failing:
SQL> select distinct sql_text,sql_id from v$sql where upper(sql_text) like '%UTL%HTTP%';
SQL_TEXT
--------------------------------------------------------------------------------
SQL_ID
-------------
select distinct sql_text,sql_id from v$sql where upper(sql_text) like '%UTL%HTT
P%'
gcgfyfty86c84
SQL> select distinct sql_text,sql_id from v$sqlarea where upper(sql_text) like '%UTL%HTTP%';
SQL_TEXT
--------------------------------------------------------------------------------
SQL_ID
-------------
select distinct sql_text,sql_id from v$sql where upper(sql_text) like '%UTL%HTT
P%'
gcgfyfty86c84

select distinct sql_text,sql_id from v$sqlarea where upper(sql_text) like '%UTL
%HTTP%'
34x064xsfa0dy
Then locate the wallet destination and add the root certificate from your CA there:
[oracle@oraapps3 ~]$ cd /scripts/src/oracle/wallets
[oracle@oraapps3 wallets]$ ls -lrt
total 96
-rw-r--r-- 1 oracle oinstall  1115 Aug 22  2023 test2.TESTint.net.cert
-rw-r--r-- 1 oracle oinstall   837 Aug 22  2023 test2
-rw------- 1 oracle oinstall 14765 Jan 25 13:47 ewallet.p12
[oracle@oraapps3 wallets]$ orapki wallet display -wallet .
Enter wallet password:
Requested Certificates:
User Certificates:
Trusted Certificates:
Subject: OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject: CN=Entrust.net Certification Authority (2048),OU=(c) 1999 Entrust.net Limited,OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.),O=Entrust.net
Subject: CN=Entrust.net Secure Server Certification Authority,OU=(c) 2000 Entrust.net Limited,OU=www.entrust.net/SSL_CPS incorp. by ref. (limits liab.),O=Entrust.net
Hi, Tom. Please see below script.
<code>create table t0326 (id number, num varchar2(100));
declare
v_empno number:=125854437665589038536841445202964995521300;
begin
dbms_output.put_line('v_empno -- ' || v_empno);
dbms_output.put_line('to_char(v_empno) -- '|| to_char(v_empno));
insert into t0326 values(10, to_char(v_empno));
commit;
end;
/
v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
select * from t0326;
ID NUM
---------- ------------------------------------------------------------
10 1.2585443766558903853684144520296500E+41
declare
v_empno number:=125854437665589038536841445202964995521300;
v_s_empno varchar2(100);
begin
v_s_empno := to_char(v_empno);
dbms_output.put_line('v_empno -- ' || v_empno);
dbms_output.put_line('to_char(v_empno) -- '|| to_char(v_empno));
dbms_output.put_line('v_s_empno -- '|| v_s_empno);
insert into t0326 values(20, to_char(v_empno));
insert into t0326 values(30, v_s_empno);
insert into t0326 values(40, to_char(v_empno, 'FM999999999999999999999999999999999999999999999999999999999'));
commit;
end;
/
v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
v_s_empno -- 125854437665589038536841445202964995521300
select * from t0326;
ID NUM
---------- -----------------------------------------------------------------------
10 1.2585443766558903853684144520296500E+41
20 1.2585443766558903853684144520296500E+41
30 125854437665589038536841445202964995521300
40 125854437665589038536841445202964995521300 </code>
It displays normally when to_char(v_empno) is passed to dbms_output.put_line, but the insert into the database converts the value to scientific notation.
I tried two solutions to this problem. Please see below.
1. Use a variable to store to_char(v_empno), then insert this variable into the database.
2. Use to_char(xx, fmt) to control the format.
I wonder why to_char(v_empno) in dbms_output.put_line is not in scientific notation, and why adding a temporary variable solves the problem.
Hi,
I used EXPLAIN PLAN and got the following results. Based on cost and time, does query 2 perform significantly better than query 1? The actual runtime for query 1 is approximately 1 minute and 40 seconds, but the plan shows 07:47:02. Why is the estimated time so different from the actual? Your help is much appreciated!
Query 1:
<code>------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71730 | 2241K| 717M (1)| 07:47:02 |
|* 1 | TABLE ACCESS FULL| TBL1 | 71730 | 2241K| 717M (1)| 07:47:02 |
------------------------------------------------------------------------------</code>
Query 2:
<code>------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71730 | 2241K| 51028 (1)| 00:00:02 |
|* 1 | TABLE ACCESS FULL| TBL1 | 71730 | 2241K| 51028 (1)| 00:00:02 |
------------------------------------------------------------------------------</code>
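One hedged observation: the Time column in a plan is not a measurement; it is derived mechanically from the cost estimate (roughly the cost multiplied by an assumed per-cost-unit time), so an overestimated cost produces an equally overestimated time. The arithmetic from the two plans above is consistent with that, as this Python check shows:

```python
# Plan 1: cost 717M, Time 07:47:02; Plan 2: cost 51028, Time 00:00:02.
est_seconds = 7 * 3600 + 47 * 60 + 2
print(est_seconds)                      # 28022 seconds

# Implied cost units per second for each plan; the ratios roughly agree,
# suggesting Time is simply computed from Cost.
print(717_000_000 / est_seconds)        # ~25587
print(51028 / 2)                        # ~25514
```

So the question to chase is why the optimizer's cost (and hence cardinality/IO estimates) for query 1 is so far off, not the Time column itself.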