Feed aggregator

The Many Ways To Sign-In To Oracle Cloud

Michael Dinh - Thu, 2019-11-28 09:09

When signing up for Oracle Cloud, a Cloud Account Name must be provided.

Log in to Oracle Cloud Infrastructure Classic using the Cloud Account Name:
https://myservices-CloudAccountName.console.oraclecloud.com

Log in to Oracle Cloud Infrastructure (the simplest option; you need to enter the Cloud Account Name):

https://www.oracle.com/cloud/sign-in.html

Log in to an Oracle Cloud Infrastructure region (you need to enter the Cloud Account Name/Cloud Tenant):

https://console.us-phoenix-1.oraclecloud.com

Log in to an Oracle Cloud Infrastructure region using the Cloud Account Name:
https://console.us-phoenix-1.oraclecloud.com/?tenant=CloudAccountName
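
A small shell sketch that assembles the patterns above (the account name and region are placeholders):

CLOUD_ACCOUNT=CloudAccountName
REGION=us-phoenix-1
echo "https://myservices-${CLOUD_ACCOUNT}.console.oraclecloud.com"
echo "https://console.${REGION}.oraclecloud.com/?tenant=${CLOUD_ACCOUNT}"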

If you find more, then please let me know.

Good Minecraft Usernames

VitalSoftTech - Thu, 2019-11-28 08:53

You are all set to play Minecraft, but things get wacky when you cannot think of a good Minecraft username. We are here to help! Minecraft is an award-winning video game developed by Markus Persson, a Swedish game developer. It is much like a video game in a sandbox because the player can create, modify, and destroy his […]

The post Good Minecraft Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

Multiple Node.js Applications on Oracle Always Free Cloud

Andrejus Baranovski - Thu, 2019-11-28 08:26
What if you want to host multiple Oracle JET applications? You can do it easily on Oracle Always Free Cloud. The solution is described in the diagram below:


You should wrap the Oracle JET application into Node.js and deploy it to an Oracle Compute Instance through a Docker container. This is described in my previous post - Running Oracle JET in Oracle Cloud Free Tier.

Make sure to create the Docker container with a port other than 80. To host multiple Oracle JET apps, you will need to create multiple containers, each assigned a unique port. For example, I'm using port 5000:

docker run -p 5000:3000 -d --name appname dockeruser/dockerimage

This maps the standard Node port 3000 to port 5000, accessible internally within the Oracle Compute Instance. We can direct external traffic from port 80 to port 5000 (or any other port mapped to a Docker container) through Nginx.
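
Before touching Nginx, a quick sanity check (assuming the mapping above) that the container answers on the internal port:

curl -I http://127.0.0.1:5000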

Install Nginx:

yum install nginx

Go to Nginx folder:

cd /etc/nginx

Edit configuration file:

nano nginx.conf

Add a context root configuration for the Oracle JET application, directing it to local port 5000:

location /invoicingdemoui/ {
     proxy_pass http://127.0.0.1:5000/;
}
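
For context, a minimal sketch of where this location block sits inside nginx.conf (the listen port and server_name shown are assumptions):

server {
    listen 80;
    server_name _;

    # forward the context root to the Node.js container on port 5000
    location /invoicingdemoui/ {
        proxy_pass http://127.0.0.1:5000/;
    }
}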

To allow HTTP calls from Nginx to port 5000 (or another port), run this command (more about it on Stack Overflow):

setsebool -P httpd_can_network_connect 1
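
You can verify the flag afterwards:

getsebool httpd_can_network_connect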

Reload Nginx:

systemctl reload nginx

Check Nginx status:

systemctl status nginx

That's all. Your Oracle JET app (demo URL) is now accessible from the outside.
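
A final check from outside the instance (the public IP is a placeholder):

curl -I http://<compute-instance-public-ip>/invoicingdemoui/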

SOA Suite 12c Stumbling on parsing Ampersands

Darwin IT - Thu, 2019-11-28 03:51

Yesterday I ran into a problem parsing XML in BPEL. A bit of context: I get messages from a JMS queue that I read 'opaque', because I want to be able to dispatch the messages to different processes based on a generic WSDL, but with a different payload.

So after the Base64 decode, for which I have a service, I need to parse the content to XML. Now, I used to use the oraext:parseEscapedXML() function for that. This function is known to have bugs, but I traced those back to BPEL 10g. And I'm on 12.2.1.3 now.

Still, I got exceptions such as:

<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected ';'.
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>

Or:

<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected name instead of .
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>

It turns out that it was due to ampersands (&amp;) in the message. The function oraext:parseEscapedXML() is known to stumble on that.

A workaround is suggested in a forum on Integration Cloud Service (ICS): use oraext:get-content-as-string() first and feed the contents to oraext:parseEscapedXML(). It turns out that this helps, although I had to fiddle around with XPath expressions to get the correct child element, since I also got the parent element surrounding the part I actually wanted to parse.

But then I found this blog, suggesting that it was replaced by oraext:parseXML() in 12c (I found that it was actually introduced in 11g).
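
For reference, a sketch of the replacement expression, reusing the same variable and part as in the fault messages above:

oraext:parseXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)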

Strange that I didn't find this earlier. Digging deeper down memory lane, I think I must have seen the function before. However, it shows I'm still learning all the time.

Enabling, disabling, and validating foreign key constraints in PostgreSQL

Yann Neuhaus - Thu, 2019-11-28 01:39

Constraints are an important concept in every relational database system, and they guarantee the correctness of your data. While constraints are essential, there are situations when it is required to disable or drop them temporarily. The reason could be performance related, because it is faster to validate the constraints in one go after a data load. The reason could also be that you need to load data and you do not know if the data is ordered in such a way that all foreign keys will validate at the time the data is loaded. In such a case it is required to either drop the constraints or to disable them until the data load is done. Validation of the constraints is deferred until all your data is there.

As always, let's start with a simple test case: two tables, the second one referencing the first:

postgres=# create table t1 ( a int primary key
postgres(#                 , b text
postgres(#                 , c date
postgres(#                 );
CREATE TABLE
postgres=# create table t2 ( a int primary key
postgres(#                 , b int references t1(a)
postgres(#                 , c text
postgres(#                 );
CREATE TABLE

Two rows for each of them:

postgres=# insert into t1 (a,b,c) values(1,'aa',now());
INSERT 0 1
postgres=# insert into t1 (a,b,c) values(2,'bb',now());
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (1,1,'aa');
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (2,2,'aa');
INSERT 0 1

Currently the two tiny tables look like this:

postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 
 c      | date    |           |          | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a)
Referenced by:
    TABLE "t2" CONSTRAINT "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# 

Let's assume we want to load some data provided by a script. As we do not know the ordering of the data in the script, we decide to disable the foreign key constraint on the t2 table and validate it after the load:

postgres=# alter table t2 disable trigger all;
ALTER TABLE

The syntax might look a bit strange, but it actually does disable the foreign key, and it would have disabled all the foreign keys had there been more than one. It becomes clearer when we look at the table again:

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)
Disabled internal triggers:
    "RI_ConstraintTrigger_c_16460" AFTER INSERT ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_ins"()
    "RI_ConstraintTrigger_c_16461" AFTER UPDATE ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_upd"()

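
The same state is visible in the pg_trigger catalog; a quick sketch (the trigger names and OIDs are from this example and will differ on your system; tgenabled shows 'D' for disabled):

postgres=# select tgname, tgenabled from pg_trigger where tgrelid = 't2'::regclass;
            tgname            | tgenabled
------------------------------+-----------
 RI_ConstraintTrigger_c_16460 | D
 RI_ConstraintTrigger_c_16461 | D
(2 rows)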
“ALL” means: please also disable the internal triggers that are responsible for verifying the constraints. One restriction of the “ALL” keyword is that you need to be a superuser to use it. Trying that with a normal user will fail:

postgres=# create user u1 with login password 'u1';
CREATE ROLE
postgres=# \c postgres u1
You are now connected to database "postgres" as user "u1".
postgres=> create table t3 ( a int primary key
postgres(>                 , b text
postgres(>                 , c date
postgres(>                 );
CREATE TABLE
postgres=> create table t4 ( a int primary key
postgres(>                 , b int references t3(a)
postgres(>                 , c text
postgres(>                 );
CREATE TABLE
postgres=> alter table t4 disable trigger all;
ERROR:  permission denied: "RI_ConstraintTrigger_c_16484" is a system trigger
postgres=> 

What you can do as a regular user is disable the user triggers:

postgres=> alter table t4 disable trigger user;
ALTER TABLE

As I do not have any user triggers, this of course does not achieve much. Coming back to our initial t1 and t2 tables: as the foreign key is currently disabled, we can insert data into the t2 table that would violate the constraint:

postgres=# select * from t1;
 a | b  |     c      
---+----+------------
 1 | aa | 2019-11-27
 2 | bb | 2019-11-27
(2 rows)

postgres=# select * from t2;
 a | b | c  
---+---+----
 1 | 1 | aa
 2 | 2 | aa
(2 rows)

postgres=# insert into t2 (a,b,c) values (3,3,'cc');
INSERT 0 1
postgres=# 

There clearly is no matching parent for this row in the t1 table, but the insert succeeds, as the foreign key is disabled. Time to re-enable the triggers and validate the constraint:

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)
Disabled internal triggers:
    "RI_ConstraintTrigger_c_16460" AFTER INSERT ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_ins"()
    "RI_ConstraintTrigger_c_16461" AFTER UPDATE ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_upd"()

postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# alter table t2 validate constraint t2_b_fkey;
ALTER TABLE
postgres=# 

Surprise, surprise: PostgreSQL does not complain about the invalid row. Why is that? If we ask the pg_constraint catalog table, the constraint is recorded as validated:

postgres=# select convalidated from pg_constraint where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
 convalidated 
--------------
 t
(1 row)

It even stays marked as validated if we disable the triggers once more:

postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# select convalidated from pg_constraint where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
 convalidated 
--------------
 t
(1 row)

That implies that PostgreSQL will not validate the constraint when we enable the internal triggers, and it will not re-check all the data as long as the status is valid. What we really need to do to get the constraint validated is to invalidate it first:

postgres=# alter table t2 alter CONSTRAINT t2_b_fkey not valid;
ERROR:  ALTER CONSTRAINT statement constraints cannot be marked NOT VALID

It seems this is not the correct way of doing it. The correct way is to drop the foreign key and then re-create it in the invalid state:

postgres=# alter table t2 drop constraint t2_b_fkey;
ALTER TABLE
postgres=# delete from t2 where a in (3,4);
DELETE 2
postgres=# alter table t2 add constraint t2_b_fkey foreign key (b) references t1(a) not valid;
ALTER TABLE
postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a) NOT VALID

Now we have the desired state and we can insert our data:

postgres=# insert into t2(a,b,c) values (3,3,'cc');
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(3) is not present in table "t1".

Surprise, again. Creating a “not valid” constraint only tells PostgreSQL not to scan the whole table to check whether all existing rows are valid. For data being inserted or updated the constraint is still checked, and this is why the insert fails.

What options do we have left? The obvious one is this:

  • Drop all the foreign keys.
  • Load the data.
  • Re-create the foreign keys, but leave them invalid to avoid the costly scan of the tables. New data will be validated.
  • Validate the constraints when there is less load on the system (a consolidated sketch follows this list).
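
Put together, a consolidated sketch of that recipe using the tables from this example:

postgres=# alter table t2 drop constraint t2_b_fkey;
ALTER TABLE
postgres=# -- load the data here
postgres=# alter table t2 add constraint t2_b_fkey foreign key (b) references t1(a) not valid;
ALTER TABLE
postgres=# -- later, during a quiet period:
postgres=# alter table t2 validate constraint t2_b_fkey;
ALTER TABLE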

Another possibility would be this:

postgres=# alter table t2 alter constraint t2_b_fkey deferrable;
ALTER TABLE
postgres=# begin;
BEGIN
postgres=# set constraints all deferred;
SET CONSTRAINTS
postgres=# insert into t2 (a,b,c) values (3,3,'cc');
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (4,4,'dd');
INSERT 0 1
postgres=# insert into t1 (a,b,c) values (3,'cc',now());
INSERT 0 1
postgres=# insert into t1 (a,b,c) values (4,'dd',now());
INSERT 0 1
postgres=# commit;
COMMIT

The downside of deferring is that it only works until the next commit, so you have to do all your work in one transaction. The key point of this post is that the assumption that the following will validate your data is false:

postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# insert into t2 (a,b,c) values (5,5,'ee');
INSERT 0 1
postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# 

This will only validate new data; it does not guarantee that all the rows satisfy the constraint:

postgres=# insert into t2 (a,b,c) values (6,6,'ff');
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(6) is not present in table "t1".
postgres=# select * from t2 where b = 5;
 a | b | c  
---+---+----
 5 | 5 | ee
(1 row)

postgres=# select * from t1 where a = 5;
 a | b | c 
---+---+---
(0 rows)

Finally, there is another way of doing it, but it directly updates the pg_constraint catalog table, and this is something you should _not_ do (never update internal tables directly!):

postgres=# delete from t2 where b = 5;
DELETE 1
postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# insert into t2 values (5,5,'ee');
INSERT 0 1
postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# update pg_constraint set convalidated = false where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
UPDATE 1
postgres=# alter table t2 validate constraint t2_b_fkey;
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(5) is not present in table "t1".
postgres=# 

In this case the constraint will be fully validated, as it is recorded as invalid in the catalog.
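
To audit for such cases, a closing sketch of a catalog query that lists every constraint currently recorded as not validated:

postgres=# select conname, conrelid::regclass from pg_constraint where not convalidated;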

Conclusion: do not rely on assumptions; always carefully test your procedures.

The post Enabling, disabling, and validating foreign key constraints in PostgreSQL appeared first on Blog dbi services.

Oracle Files Lawsuit against Secretary of Labor Eugene Scalia and Department of Labor plus OFCCP and OFCCP Director Craig Leen Challenging the Unauthorized U.S. Department of Labor Enforcement and Adjudicative Regime

Oracle Press Releases - Wed, 2019-11-27 13:42
Press Release
Oracle Files Lawsuit against Secretary of Labor Eugene Scalia and Department of Labor plus OFCCP and OFCCP Director Craig Leen Challenging the Unauthorized U.S. Department of Labor Enforcement and Adjudicative Regime

Washington, D.C.—Nov 27, 2019

Oracle today filed a lawsuit in U.S. District Court in Washington, D.C. challenging the legality of the system of enforcement and adjudication established by the U.S. Department of Labor and its Office of Federal Contract Compliance Programs (OFCCP) for discrimination claims against government contractors. The complaint alleges that this system was not authorized by Congress or the President and contravenes statutory authorities. 

Oracle’s complaint states that under the current system, claims against government contractors are not prosecuted in federal courts with a federal jury. Instead, the Department of Labor itself serves as investigator, prosecutor, judge, jury and appellate court, usurping the role of the Equal Employment Opportunity Commission (EEOC), the Department of Justice and the Courts.

“Oracle filed this case because it is being subjected to an unlawful enforcement action by the Labor Department utilizing a process with no statutory foundation whatsoever,” said Ken Glueck, executive vice president, Oracle.

Congress expressly declined to give agencies, such as EEOC, the broad and unfettered authority that the Department of Labor has assumed for itself to investigate, prosecute and adjudicate lawsuits entirely in-house. This system violates the U.S. Constitution and acts of Congress, including the Civil Rights Act of 1964 and the Equal Employment Opportunity Act of 1972.

Oracle recognizes the vital importance of a lawful system that investigates and prosecutes discrimination by employers, including government contractors. But the existing extra-statutory Department of Labor process results in arbitrary enforcement actions against the many employers who qualify as federal contractors, often with no evidentiary foundation and designed to do nothing more than extort concessions under a system lacking any semblance of due process. 

“It is apparent that neither Solicitor of Labor Kate O’Scannlain nor OFCCP Director Craig Leen is prepared to move back to a system where merits trump optics. Oracle brings this suit because the leadership at the Department of Labor has failed to restore balance to an unrestrained bureaucracy,” said Glueck.

“We believe strongly in maintaining a level playing field in the workplace for all of our employees and remain proud of our firm commitment to equality in our workforce. This lawsuit seeks to ensure that employers such as Oracle are likewise entitled to a level playing field when the government asserts claims of discrimination. That has not been the case with the OFCCP, resulting in enforcement actions that are meritless and defamatory to Oracle, its executives, and other government contractors,” said Glueck.

In addition to today’s lawsuit, Oracle fully intends to defend itself against the Department of Labor’s baseless enforcement action set to begin trial on December 5. The government’s case rests on false allegations, cherry-picked statistics, and erroneous and radical theories of the law. The Labor Department’s nonsensical claims underscore the need for the federal courts to declare the Department of Labor’s current enforcement system unconstitutional.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

A schema and a user are not the same in PostgreSQL

Yann Neuhaus - Wed, 2019-11-27 11:56

When people with an Oracle background attend our PostgreSQL DBA Essentials training there is always a bit of confusion about schemas and users. In Oracle, a schema and a user have a one-to-one relationship and there is no real distinction between a user and a schema. In PostgreSQL the situation is different: all the objects a user creates are created in a specific schema (or namespace). Other users may or may not have permissions to work with these objects or even to create new objects in a specific schema. Compared to Oracle there is one more layer.

The hierarchy in PostgreSQL is this:

|-------------------------------------------|---|
| PostgreSQL instance                       |   |
|-------------------------------------------| U |
|     Database 1      |     Database 2      | S |
|---------------------|---------------------| E |
| Schema 1 | Schema 2 | Schema 1 | Schema 2 | R |
|----------|----------|----------|----------| S |
| t1,t2,t3 | t1,t2,t3 | t1,t2,t3 | t1,t2,t3 |   |
-------------------------------------------------

What this little ASCII image is meant to tell you: users (and roles) in PostgreSQL are global objects and are defined not in a database but at the instance level. Schemas are created by users in a specific database and contain database objects. Where a lot of people get confused is this:

postgres@centos8pg:/home/postgres/ [pgdev] psql -X postgres
psql (13devel)
Type "help" for help.

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# 

Nothing in this create table statement references a schema, but according to what I just said above, all objects must go into a schema. Where did this table go, then? Each PostgreSQL database comes with a public schema by default, and if you do not explicitly specify a schema, the new object will go there. There are several ways of asking PostgreSQL for the schema of a given table, but probably the two most used ones are these (the first one asks a catalog view and the second one uses a psql shortcut):

postgres=# select schemaname from pg_tables where tablename = 't1';
 schemaname 
------------
 public
(1 row)

postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 

Btw: the public schema is a special schema in PostgreSQL, and you should either remove it or at least revoke the default permissions from PUBLIC on it. Check here for more information on that.
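
A minimal sketch of locking the public schema down instead of dropping it:

postgres=# revoke create on schema public from public;
REVOKE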

So what happens when you drop the public schema and try to create a table afterwards?

postgres=# drop schema public cascade;
NOTICE:  drop cascades to table t1
DROP SCHEMA
postgres=# create table t1 ( a int );
ERROR:  no schema has been selected to create in
LINE 1: create table t1 ( a int );
                     ^
postgres=# 

As we do not have a single schema anymore:

postgres=# \dn
List of schemas
 Name | Owner 
------+-------
(0 rows)

… PostgreSQL has no idea where to put the table. At this point it should already be clear that a schema in PostgreSQL is not the same as a user. We are connected as the “postgres” user, but we do not have a schema to create our objects in. Let's create the first schema and, right afterwards, the same table as above:

postgres=# create schema my_schema;
CREATE SCHEMA
postgres=# create table t1 ( a int );
ERROR:  no schema has been selected to create in
LINE 1: create table t1 ( a int );
                     ^
postgres=# 

… again PostgreSQL is not able to create the table. The question is: why did it work when the public schema was there? We did not specify the public schema above, but it worked. This is where the search_path comes into play:

postgres=# show search_path;
   search_path   
-----------------
 "$user", public
(1 row)

postgres=# 

By default the search_path contains your current username and public. As none of these schemas exist right now, the create table statement will fail. There are two options to fix that. Either use the fully qualified name:

postgres=# create table my_schema.t1 ( a int );
CREATE TABLE
postgres=# \d my_schema.t1
               Table "my_schema.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 

… or adjust the search_path so that your preferred schema comes first:

postgres=# set search_path = 'my_schema',"$user",public;
SET
postgres=# show search_path ;
        search_path         
----------------------------
 my_schema, "$user", public
(1 row)

postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# \d t2
               Table "my_schema.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 

postgres=# 

That all might look a bit strange at the beginning, especially when you are used to Oracle, but it also provides great flexibility:

  • A user can create many different schemas; there is no need to create separate users
  • A user can grant permission to create objects in one of their schemas to someone else (see the sketch after this list)
  • You can logically divide your application
  • (no, there are no synonyms in PostgreSQL)
  • There are default privileges you can use
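
A short sketch of the grant flow from the second bullet (the schema name app_schema and table t9 are made up for illustration, and the role u1 is assumed to exist):

postgres=# create schema app_schema;
CREATE SCHEMA
postgres=# grant usage, create on schema app_schema to u1;
GRANT
postgres=# \c postgres u1
You are now connected to database "postgres" as user "u1".
postgres=> create table app_schema.t9 ( a int );
CREATE TABLE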

The post A schema and a user are not the same in PostgreSQL appeared first on Blog dbi services.

Goal seek excel Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-11-27 10:33

Microsoft Excel is one of the most widely used applications for data analysis. Professionals from the financial industry to information technology, and researchers to academics, make extensive use of Excel on a day-to-day basis. There is no denying the fact that the range of features that Microsoft Excel offers is truly amazing. This is […]

The post Goal seek excel Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Scalable Distributed BI Architecture

Dylan's BI Notes - Tue, 2019-11-26 19:15
Incorta, a scalable distributed BI system...
Categories: BI & Warehousing

What Does PDF Stand For?

VitalSoftTech - Tue, 2019-11-26 09:52

The acronym PDF is widely used while converting and downloading documents and browsing on the internet. You might have wondered what PDF stands for. In today's day and age, jargonization has enveloped every part of our life, including writing and browsing. It is, however, very convenient to use these acronyms instead of pronouncing the entire […]

The post What Does PDF Stand For? appeared first on VitalSoftTech.

Categories: DBA Blogs

How To Create Database on Oracle’s Gen2 Cloud (OCI)

Online Apps DBA - Tue, 2019-11-26 00:25

[Video] How To Create Database on Oracle’s Gen2 Cloud (OCI) Show notes at https://k21academy.com/36 Check the video by Oracle ACE & Cloud Expert Atul Kumar from Team K21 Academy in which he discusses the step-by-step procedure to create a database on Oracle Cloud (OCI) and what each configuration means during the DB provisioning on […]

The post How To Create Database on Oracle’s Gen2 Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

CVE-2019-2638, CVE-2019-2633, Oracle Payday Vulnerabilities - AppDefend Protection

Two Oracle E-Business Suite security vulnerabilities (CVE-2019-2638, CVE-2019-2633) fixed in the April 2019 Oracle Critical Patch Update (CPU) have recently been publicized. These vulnerabilities allow an attacker to execute arbitrary SQL statements in the Oracle E-Business Suite database that can result in complete compromise of the environment, including fraudulent transactions, changing of bank accounts, and circumvention of application security controls. Integrigy’s AppDefend, the application firewall for Oracle E-Business Suite, is the only solution that provides virtual patching for and proactive defense against these vulnerabilities.

These two vulnerabilities are in the Oracle E-Business Suite (EBS) TCF Server, which provides services to the professional Forms interface for a limited set of Forms. TCF Server is implemented and enabled in all versions of Oracle E-Business Suite, including 11i, 12.0, 12.1, and 12.2. It cannot be disabled without a customization to Oracle EBS.

TCF Server is a servlet running as part of the standard Oracle EBS web application server and communicates using HTTP or HTTPS between the Forms Java client and the web application server. For R12, the servlet is available at the URL /OA_HTML/AppsTCFServer. It uses a proprietary application-level protocol to communicate between the Forms client and server.
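
A quick way to check whether the servlet is exposed externally (a sketch; the hostname is a placeholder, and a 200 or 302 response suggests it is reachable):

curl -s -o /dev/null -w "%{http_code}\n" https://ebs.example.com/OA_HTML/AppsTCFServer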

The risk is that, unlike most Oracle EBS SQL injection vulnerabilities, which only allow fragments of SQL statements to be appended to standard Oracle EBS SQL statements being executed, these security bugs allow execution of complete SQL statements as the Oracle EBS APPS database account. When evaluating the risk of these vulnerabilities in your environment, it is important to differentiate between external access to the Oracle EBS environment through the Internet, when modules like iSupplier, iStore, and iRecruitment are being used, and internal access from your internal network only. The risk from external access is critical and should be addressed immediately. The internal risk is still high and depends on the security posture of your internal network. It is important to realize that non-Oracle-EBS-aware web application firewalls, database security tools, and other network security products will not provide any protection against successful exploitation of these vulnerabilities.

Integrigy AppDefend is the only solution that provides virtual patching for and proactive defense against these TCF Server vulnerabilities as well as other Oracle EBS security vulnerabilities. Integrigy recognized the potential issues with TCF Server early; even the first release of AppDefend for R12, in 2007, blocked external access to the TCF Server by default.

AppDefend provides multiple layers of protection against TCF Server vulnerabilities as follows:

  1. Blocks all access to TCF Server externally (since 2007).
  2. Enforces Oracle EBS access control for TCF Server, allowing only authorized EBS users to access the TCF Server (since 2018).
  3. Whitelists the functions accessible through TCF Server (since 2018).
  4. Blocks specific vulnerabilities in TCF Server (2018, 2019).
  5. Advanced SQL injection protection optimized specifically for Oracle EBS detects and blocks most of the SQL statements used in TCF Server and other 0-day attacks (since 2007).

If you do not have AppDefend, applying the latest Oracle Critical Patch Update for Oracle EBS will remediate these specific vulnerabilities, and for external sites it is critical that the Oracle EBS URL Firewall is implemented as documented in Appendix E of My Oracle Support Note ID 380490.1. However, these solutions will not protect you before the security patches are applied, nor against future TCF Server vulnerabilities and other Oracle EBS 0-day attacks.

Please let us know if you have any questions regarding the latest Oracle EBS security vulnerabilities at info@integrigy.com.

SQL Injection, Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Video : Oracle REST Data Services (ORDS) : OAuth Implicit

Tim Hall - Mon, 2019-11-25 02:21

In today’s video we look at the OAuth Implicit flow for Oracle REST Data Services.

This goes together with a previous video about first-party authentication here.

Both videos are based on parts of this article.

There are loads of other ORDS articles here.

The star of today’s video is Bob Rhubart, who amongst other things is the host of the Oracle Groundbreakers Podcast.

Cheers

Tim…


Believer | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Sun, 2019-11-24 07:08

My Son Dharun Performing at Improviser Music Studio

Believer



Please subscribe to our channel Improviser Music Studio

How to run a java software which needs RXTX on a raspberry pi

Dietrich Schroff - Sat, 2019-11-23 12:35
In my last posting I wrote about migrating my aircraft noise measurement station to Alpine Linux. There I had some problems getting the RXTX library for Java running on a Linux that uses musl and not GNU libc6.

Why does my java application require RXTX? As stated on the RXTX page:
RXTX is a Java library, using a native implementation (via JNI), providing serial and parallel communication for the Java Development Toolkit (JDK).

Now I wanted to move to a Raspberry Pi. But this runs on ARM, and RXTX is only provided for x86/x64 systems.

But there is another way: ser2net


With this proxy, /dev/ttyUSB0 can be mapped to a TCP port, and Java can access it without using RXTX.
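
A minimal configuration sketch (classic /etc/ser2net.conf syntax; the TCP port and line settings are assumptions you will need to adjust for your device):

# expose /dev/ttyUSB0 on TCP port 2000 (9600 baud, 8N1)
2000:raw:600:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT

The Java application then simply opens a TCP socket to localhost:2000 instead of the serial device, so no native serial library is required.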

DOAG 2019: Cloud or Kubernetes on premise and CI/CD pipeline at the top of (my) interest

Yann Neuhaus - Fri, 2019-11-22 11:33

DOAG 2019 is just over now, and looking at the subjects of the sessions I attended, I have the feeling that cloud and on-premise Kubernetes deployments and CI/CD pipelines were the top subjects interesting people, or maybe just me!

I started with a DevOps session and then followed a lot of Kubernetes- and container-related sessions. The three following sessions are those I think of first when trying to summarize.

“The Pillars of Continuous Delivery” – Lykle Thijssen

A very interesting session about continuous delivery.

“How to create a solid foundation for Continuous Delivery, based on four pillars: Agile, Microservices, Automation and Cloud. Lykle explained how agile development can reduce time to production by short development cycles, how Microservices make the deployment process easier with less dependencies, how automation can improve test, build and deployment and how cloud can help with the necessary infrastructure. It all hangs together and without a solid foundation, there is always the risk of building a house of cards.”

“Containers Demystified” – Jan Karremans and Daniel Westermann

“If you are looking at implementing DevOps, infrastructure as code, if you want to adopt the cloud, or if you are seeking to go with Microservices, you will find containers on your path. After the opportunities that virtualization brought, containers are the next best thing! Also (and perhaps specifically) when looking at databases with containers, there are specific challenges. Where container infrastructures are built to fail and databases rely on persistence, you have challenges.”

Watch My Services: Prometheus in Kubernetes – Thorsten Wussow

“The monitoring of microservices is a special challenge. In the lecture, the problem of monitoring microservices will be discussed. Furthermore, various products with which one can carry out such monitoring are briefly considered. Then a demo shows how to set up monitoring in a Kubernetes cluster with the help of Prometheus and Grafana, and what to look for.”

A Kubernetes cluster in Oracle Cloud Infrastructure was used to demonstrate the deployment and the configuration of Prometheus and Grafana.
With Thorsten, it always looks simple when following the demonstration. Now it is time to implement it.

Kubernetes im Vergleich: Google, AWS, Oracle, Microsoft – Michael Schulze & Borys Neselovskyi

“The presentation gives a brief overview of the architecture and construction of Kubernetes. In addition, useful application scenarios are presented in selected use cases. Later in the talk, we will compare existing Kubernetes cloud solutions from leading vendors (Google, AWS, Oracle and Microsoft). Here, a customer scenario is used as the basis for the comparison. The following criteria play a role: installation, maintenance, performance, monitoring and, of course, costs.”

A nice comparison between the different cloud solutions.

There were some other very interesting sessions but I will not list all of them now.

DOAG 2019 is over. See you next year in Nuremberg.

The post DOAG 2019: Cloud or Kubernetes on premise and CI/CD pipeline at the top of (my) interest appeared first on Blog dbi services.

Oracle Database 18c: New Features asmcmd

Michael Dinh - Fri, 2019-11-22 11:11
============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion
ASM version         : 19.4.0.0.0
[oracle@ol7-19-rac1 ~]$ 

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd -V
asmcmd version 19.4.0.0.0
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showpatches
---------------
List of Patches
===============
29401763
29517242
29517247
29585399
29834717
29850993
29851014

[oracle@ol7-19-rac1 ~]$ asmcmd showpatches -l
Oracle ASM release patch level is [2037353368] and 
the complete list of patches [29401763 29517242 29517247 29585399 29834717 29850993 29851014 ] have been applied on the local node. 
The release patch string is [19.4.0.0.0].
[oracle@ol7-19-rac1 ~]$

============================================================
OLD:
============================================================

### MISSING from OLD are the previous 19.3 patches:
29517247; ACFS RELEASE UPDATE 19.3.0.0.0	
29585399; OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242; Database Release Update : 19.3.0.0.190416 (29517242)

[oracle@ol7-19-rac1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2037353368] and 
the complete list of patches [29401763 29517242 29517247 29585399 29834717 29850993 29851014 ] have been applied on the local node. 
The release patch string is [19.4.0.0.0].
[oracle@ol7-19-rac1 ~]$

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29851014;ACFS RELEASE UPDATE 19.4.0.0.0 (29851014)
29850993;OCW RELEASE UPDATE 19.4.0.0.0 (29850993)
29834717;Database Release Update : 19.4.0.0.190716 (29834717)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion --softwarepatch
ASM version         : 19.4.0.0.0
Software patchlevel : 2037353368
[oracle@ol7-19-rac1 ~]$

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node ol7-19-rac1 is [2037353368].
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion --active
Oracle ASM active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. 
The cluster active patch level is [2037353368].
[oracle@ol7-19-rac1 ~]$

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. 
The cluster active patch level is [2037353368].
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion --releasepatch
ASM version         : 19.4.0.0.0
Information about release patchlevel is unavailable since no ASM instance connected

[oracle@ol7-19-rac1 ~]$ asmcmd
ASMCMD> showversion --releasepatch
ASM version         : 19.4.0.0.0
Release patchlevel  : 2037353368
ASMCMD>

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2037353368] and 
the complete list of patches [29401763 29517242 29517247 29585399 29834717 29850993 29851014 ] have been applied on the local node. 
The release patch string is [19.4.0.0.0].
[oracle@ol7-19-rac1 ~]$

Basically, these new asmcmd features already existed as crsctl query commands; use whichever is best suited.

Nightly process slowing down.

Tom Kyte - Thu, 2019-11-21 11:50
Hi. We have a process that runs every night that is beginning to slow down, and we need some help finding the resources to analyse the problem. In our setup, unfortunately, both transaction schemas and warehousing (statistics) schemas are kept on...
Categories: DBA Blogs

Golden Signals for performance

Tom Kyte - Thu, 2019-11-21 11:50
Hello Tom, I am a member of a site reliability engineering (SRE) team and we are trying to develop SRE "golden signals" for an Oracle 11g/12c database. These signals are: 1) Throughput 2) Latency 3) Response Time 4) Error rate (not sure about this one...
Categories: DBA Blogs

Difference between DBRM and IORM

Tom Kyte - Thu, 2019-11-21 11:50
Dear Sir, please help me understand the points below. 1) Difference between DBRM and IORM? 2) Difference between ACFS, ADVM and DBFS? Thanks, Pradeep
Categories: DBA Blogs
