Oracle ojdbc driver download for linux
When you are in edit mode, you can list the users in the login module using the jaas:user-list command. The jaas:user-add command adds a new user with a password in the currently edited login module. To "commit" your change (here, the user addition), you have to execute the jaas:update command. On the other hand, if you want to roll back the user addition, you can use the jaas:cancel command.
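A hypothetical console session tying these commands together (the realm name, user, and password are illustrative; jaas:realm-manage is the command that enters edit mode in recent Karaf versions):

```
karaf@root()> jaas:realm-manage --realm karaf
karaf@root()> jaas:user-add myuser mypassword
karaf@root()> jaas:user-list
karaf@root()> jaas:update
```

If your Karaf version differs, check the help of the jaas command group for the exact command names.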

As for the jaas:user-add command, you have to use jaas:update to commit your change, or jaas:cancel to roll it back. The jaas:group-add command assigns a group (creating the group if needed) to a user in the currently edited login module.

The jaas:group-delete command removes a user from a group in the currently edited login module. The jaas:group-role-add command adds a role to a group in the currently edited login module. The jaas:group-role-delete command removes a role from a group in the currently edited login module. The jaas:update command commits your changes in the login module backend.

If encryption is enabled, passwords are encrypted the first time a user logs in.

The default is "basic" which just supports basic digesting of the password, without salting. This is not secure for production environments. A more secure alternative is "jasypt", which supports digesting with salting. However, the most secure alternative which should be used in production is "spring-security-crypto", which supports modern password digest algorithms such as "argon2" and "bcrypt".

The default is SHA since Karaf 4, and more algorithms are available when the "spring-security-crypto" encryption provider is used. The possible values for the encoding are hexadecimal or base64; the default value is hexadecimal. For the SSH layer, Karaf supports authentication by key, allowing you to log in without providing a password. NB: you can provide a passphrase with the -N option of ssh-keygen but, in that case, you will need to enter the passphrase to allow the SSH client to use the key. You then register the public key on the Karaf side.
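Assuming standard OpenSSH tooling, a client key pair for key-based login can be generated as follows (the file name is illustrative; the public key is the part you register on the Karaf side):

```shell
# Generate an RSA key pair with an empty passphrase (-N "").
# Supplying a real passphrase instead means the SSH client will
# prompt for it whenever the key is used.
ssh-keygen -t rsa -b 4096 -N "" -f ./karaf_client_key

# The .pub file is the public key to register for the Karaf user.
cat ./karaf_client_key.pub
```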

Here, we define that feature:list and feature:info commands can be executed by users with viewer role, whereas the feature:install and feature:uninstall commands can only be executed by users with admin role. Note that users in the admin group will also have viewer role, so will be able to do everything.

Only users with the admin role can execute the feature:install, feature:uninstall, feature:start, feature:stop and feature:update commands. Only users with the admin role can execute the jaas:update command. Only users with the admin role can execute the kar:install and kar:uninstall commands. Only users with the admin role can execute the shell:edit, shell:exec, shell:new, and shell:java commands. Only users with the admin role can execute the system:property and system:shutdown commands. Users with the manager role can call system:start-level above a given threshold; otherwise the admin role is required.

Users with the viewer role can also obtain the current start level. You can fine-tune the command RBAC support by editing the corresponding Karaf ACL configuration, which contains the "global" ACL definition. This ACL limits the setStartLevel, start, stop, and update operations on system bundles to users with the admin role only. The other operations can be performed by users with the manager role. This ACL limits the changes on jmx.

This ACL limits the invocation of the canInvoke operation to users with the viewer role. This ACL limits the changes on jmx. This ACL limits the invocation of the gc operation to users with the manager role only.

The Apache Karaf WebConsole is not available by default. To enable it, you have to install the webconsole feature. All users with the admin role can log on to the WebConsole and perform any operations. The JVM imposes some restrictions on the use of such jars: they have to be signed and available on the boot classpath. While this approach works fine, it has a global effect and requires you to configure all your servers accordingly.

In addition, you may want to provide access to the classes from those providers from the system bundle, so that all bundles can access them. This can be done by modifying the org. Apache Karaf provides Docker resources allowing you to easily create your own image and container. You can create your own Docker image.

If you want to build the Karaf image, run the Docker build. The Docker daemon could be on the same local machine where the Apache Karaf instance is running, or on a remote Docker machine. In a nutshell, you just have to enable the tcp transport connector for the Docker daemon, using the -H option of dockerd. The docker:images command (or the images operation on the DockerMBean) lists the images available on Docker. The docker:rmi command (or the rmi operation on the DockerMBean) removes an image from Docker.
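As a sketch, the daemon can be started with an additional TCP endpoint (the port is illustrative; exposing the Docker API over unauthenticated TCP should only be done on trusted networks):

```
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
```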

Provisioning is a specific way of creating a container based on the current running Karaf instance: it creates a Docker container using the current running Apache Karaf instance. You can then reuse this container to create a Docker image and duplicate the container on another Docker backend, via Docker Hub for instance.

The docker:top command displays the currently running processes in an existing container. OBR achieves the first goal by providing a service that can automatically install a bundle, with its deployment dependencies, from a bundle repository.

This makes it easier for people to experiment with existing bundles. OBR is an optional Apache Karaf feature. You have to install the obr feature to use the OBR service.

It means that Apache Karaf can use an OBR repository for the installation of bundles, and during the installation of features. The OBR repository contains all bundles. Thanks to that, when you install ("deploy" in OBR wording) a bundle using the OBR service, it looks for all bundles providing the capabilities matching the bundle requirements.

It will automatically install the bundles needed by the bundle. If the feature specifies obr in its resolver attribute, Apache Karaf can use the OBR service to construct the list of bundles to install with the feature. The obr:url-add command expects a url argument.

You have to reload the repository. Instead of using the URL, you can use the repository index as displayed by the obr:url-list command. To do so, you have to use the -i option. The obr:list command lists all bundles available on the registered OBR repositories.
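Sketched as a console session (the repository URL and index are illustrative):

```
karaf@root()> obr:url-add file:///tmp/repository.xml
karaf@root()> obr:url-list
Index | OBR URL
0     | file:///tmp/repository.xml
karaf@root()> obr:url-refresh -i 0
karaf@root()> obr:list
```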

The obr:info command displays the details about bundles available on the OBR service. In particular, it provides details about the capabilities and requirements of bundles. It means that you have to use the following command to see the info about the wrapper core bundle with version 4. The obr:source command checks the source URL in the OBR metadata for a given bundle, and downloads the sources to a target folder. It means that you have to use the following command to download the sources of the wrapper core bundle with version 4.

The obr:resolve command displays the resolution output for a given set of requirements. Actually, it shows the bundles providing the capabilities that match the requirements. Optionally, the obr:resolve command can deploy the bundles as the obr:deploy command does. For instance, to know the OBR bundle resolving the org. The obr:find command is similar to the obr:resolve one. It displays the bundles resolving the provided requirements, with details. For instance, to find the OBR bundle providing the org.

The obr:deploy command installs a bundle from the OBR repository, including all bundles required to satisfy the bundle requirements. By default, the bundles are just installed, not started. You can use the -s option to start the bundles. It means that you have to use the following command to deploy the wrapper core bundle with version 4.
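For instance, deploying and starting a bundle from the OBR repository might look like this (the bundle symbolic name and version are illustrative):

```
karaf@root()> obr:deploy -s org.apache.servicemix.wrapper.core,4.0.0
```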

The obr:start command does the same as the obr:deploy -s command: it installs the bundle and all bundles required to satisfy its requirements, and starts all installed bundles. It means that you have to use the following command to deploy and start the wrapper core bundle with version 4. The Bundles attribute provides a tabular data set containing all bundles available on the registered OBR repositories.

The bundles are not automatically started. If start is true, the bundles are automatically started. If deployOptional is true, even the optional requirements will be resolved by the OBR service, meaning more bundles may be installed to satisfy the optional requirements. If you want to use the Apache Felix Http Service, you have to install the felix-http feature. The Pax Web whiteboard extender is an enhancement of the http feature. So use the following command to install it.

For commands, take a look at the command section in the webcontainer chapter. The installation of the webconsole feature automatically installs the war feature. The default port used by the WebContainer is defined in the Karaf configuration. Note: the connector is actually bound only when at least one servlet or web application is using it. However, note that this is not a good idea from a security point of view. The first step is to create a keystore containing a server certificate.

For instance, the following command creates a keystore with a self-signed certificate. By default, Apache Karaf binds these ports on all network interfaces (0.0.0.0). You can configure the host property to bind to a specific network interface with a given IP address. Bundle-ManifestVersion: 2 defines that the bundle follows the rules of the R4 specification. Bundle-SymbolicName specifies a unique, non-localizable name for the bundle.
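The usual tool for this is keytool from the JDK; as a sketch with OpenSSL tooling instead, a self-signed certificate can be generated and packaged as a PKCS#12 keystore like this (subject, alias, and password are illustrative):

```shell
# Create a self-signed certificate and private key.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout server.key -out server.crt

# Bundle them into a PKCS#12 keystore usable by the web container.
openssl pkcs12 -export -in server.crt -inkey server.key \
  -name jetty -passout pass:changeit -out keystore.p12
```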

This name should be based on the reverse domain name convention. A WAB can be deployed directly in Apache Karaf, for instance by dropping the archive in the deploy folder, or by using the bundle:install command. For instance, the Apache Karaf manual documentation is available as a WAB that you can deploy directly in a running instance. It allows you to expose remote web applications in Karaf. You can use the Karaf ProxyService programmatically, or via the corresponding shell commands and MBeans.

The State is the current state of the Servlet (Deployed or Undeployed). For instance, if you installed the Apache Karaf manual WAR file as described previously, you can see it with web:list. The web:stop command stops a web application in the WebContainer. The web:stop command expects an id argument corresponding to the bundle ID as displayed by the web:list command. The web:start command starts a web application in the WebContainer. The web:start command expects an id argument corresponding to the bundle ID as displayed by the web:list command.

The http:proxy-add command registers a new HTTP proxy. By default, two balancing policies are available: random (selecting one URL randomly) and round-robin (selecting one URL after another).

You can see the available balancing policies using the http:proxy-balancing-list command. The Servlets attribute provides tabular data with the list of deployed Servlets, including: State, the current Servlet state (Deployed or Undeployed). The ProxyBalancingPolicies attribute provides the collection of available balancing policies. The WebBundles attribute provides tabular data with the list of deployed web applications. The OSGi service registry can be viewed as an example of such a system.

Apache Karaf also supports regular JNDI, including a directory system where you can register name bindings, sub-contexts, etc. The jndi:names command lists all JNDI names. The jndi:names command accepts an optional context argument to list the names in the given context. The jndi:names command lists only the fully qualified names. It means that empty JNDI sub-contexts are not displayed. To display all JNDI sub-contexts (empty or not), you can use the jndi:contexts command.

However, the transaction feature is installed as a transitive dependency when installing enterprise features like jdbc or jms features for instance.

The installation of the transaction feature installs a new configuration: org. A recoverable resource is a transactional object whose state is saved to stable storage if the transaction is committed, and whose state can be reset to what it was at the beginning of the transaction if the transaction is rolled back.

At commit time, the transaction manager uses the two-phase XA protocol when communicating with the recoverable resource to ensure transactional integrity when more than one recoverable resource is involved in the transaction being committed. Transactional databases and message brokers like Apache ActiveMQ are examples of recoverable resources. A recoverable resource is represented using the javax. If a transaction has a lifetime longer than this timeout, a transaction exception is raised and the transaction is rolled back.

The default is 10 minutes. Combined with the aries. The transaction feature defines the configuration in memory by default. It means that any changes you make will be lost if Apache Karaf restarts. The jdbc:ds-factories command lists the available datasource factories, with the available drivers. For instance, once you have installed the jdbc feature, you can install the pax-jdbc-postgresql feature, providing the PostgreSQL datasource factory.

You can see there the JDBC driver name and class that you can use in the jdbc:ds-create command. The jdbc:ds-create command automatically creates a datasource definition file by leveraging pax-jdbc. Another example uses the PostgreSQL driver class name (which you can find with the jdbc:ds-factories command).
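A sketch of such a creation (the option names follow the Karaf 4 jdbc:ds-create command; the connection details are illustrative):

```
karaf@root()> jdbc:ds-create -dn postgresql -u myuser -p mypassword \
  -url jdbc:postgresql://localhost:5432/mydb mydatasource
karaf@root()> jdbc:ds-list
```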

The jdbc:ds-info command provides details about a JDBC datasource. The datasource may be specified using name or service. Typically, you can use the jdbc:execute command to create tables, insert values into tables, etc. The jdbc:query command is similar to the jdbc:execute one, but it displays the query result.

It automatically creates a blueprint XML file in the deploy folder containing the JMS connection factory definition corresponding to the type that you specified. Currently, only the activemq and webspheremq types are supported. It also allows you to define the pooling framework you want to use: pooledjms, narayana, or transx. In the previous example, we assume that you previously installed the activemq-broker feature.

The connectionfactory-test. The jms:delete command deletes a JMS connection factory. The name argument is the name that you used at creation time. If the JMS broker requires authentication, you can use the -u (--username) and -p (--password) options. Depending on the JMS connection factory type, this command may not work. For now, the command works only with Apache ActiveMQ. For instance, to send a message containing Hello World to the MyQueue queue, you can use the jms:send command. If you want to consume only some messages, you can define a selector using the -s (--selector) option.

The jms:consume command just consumes (and thus removes) messages from a JMS queue. If you want to see the details of messages, you can use the jms:browse command. By default, the message properties are not displayed.

You can use the -v (--verbose) option to display the properties. If you want to browse only some messages, you can define a selector using the -s (--selector) option. If the JMS broker requires authentication, you can use the -u (--username) and -p (--password) options.
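A hypothetical session combining these commands (the connection factory, queue names, selector, and credentials are illustrative):

```
karaf@root()> jms:send -u karaf -p karaf jms/test MyQueue "Hello World"
karaf@root()> jms:browse -v jms/test MyQueue
karaf@root()> jms:consume -s "JMSPriority > 4" jms/test MyQueue
```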

The jms:move command consumes all messages from a JMS queue and sends them to another one. For instance, to move all messages from the MyQueue queue to the AnotherQueue queue, you can do the following. The Connectionfactories attribute provides the list of all JMS connection factory names. Apache OpenJPA. The hibernate feature installs the jpa feature with the Hibernate persistence engine. The eclipselink feature installs the jpa feature with the EclipseLink persistence engine. By default, the feature:repo-add openejb command will install the latest OpenEJB version available.

However, this project is now "deprecated", and all resources from KarafEE will move directly to Apache Karaf soon. See the developer guide for that.

You just have to install the feature corresponding to the CDI container and version that you want to use. Apache Karaf natively provides a failover mechanism. Karaf provides failover capability using either a simple lock file or a JDBC locking mechanism. In both cases, a container-level lock system allows bundles to be preloaded into the slave Karaf instance in order to provide faster failover performance.

This container-level lock system allows bundles installed on the master to be preloaded on the slave, in order to provide faster failover performance. When the first instance starts, if the lock is available, it takes the lock and becomes the master.

If a second instance starts, it tries to acquire the lock. As the lock is already held by the master, the instance becomes a slave, in standby mode (not active). A slave periodically checks whether the lock has been released. The Apache Karaf instances share a lock on the filesystem.

Here, we use the filesystem lock. All instances have to share the same lock. With JDBC locking, the master instance holds the lock by locking a table in the database. If the master loses the lock, a waiting slave gains access to the locking table, acquires the lock on the table, and starts.

The JDBC driver to use is the one corresponding to the database used for the locking system. Apache Karaf supports lock systems for specific databases see later for details. Karaf will first try to find the table as specified in this property, and if not found, it will try the table name in lower and upper case.
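A sketch of the relevant etc/system.properties entries for JDBC locking (the property names follow the Karaf failover documentation; the URL, credentials, table name, and timeout are illustrative and depend on your database):

```
karaf.lock=true
karaf.lock.class=org.apache.karaf.main.lock.DefaultJDBCLock
karaf.lock.level=50
karaf.lock.jdbc.url=jdbc:derby://localhost:1527/sample
karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30
```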

If the connection to the database is lost, the master instance tries to gracefully shut down, allowing a slave instance to become the master when the database is back. The former master instance will require a manual restart. Apache Karaf supports the Oracle database for locking. The lock implementation class name to use is org.

The karaf. It means that you must manually create the Oracle database instance first, before using the lock mechanism. Apache Karaf supports the Apache Derby database for locking. Apache Karaf supports container-level locking. It allows bundles to be preloaded into the slave instance.

Thanks to that, switching to a slave instance is very fast, as the slave instance already contains all required bundles. All bundles with an ID equal to or lower than this start level will be started in that Karaf instance. A cold standby instance: core bundles are not loaded into the container. Slaves will wait until the lock is acquired to start the server.

A hot standby instance: core bundles are loaded into the container. Slaves will wait until the lock is acquired to start user-level bundles. The console will be accessible for each slave instance at this level.

Using hot standby means that the slave instances are running and bound to some ports. So, if you use master and slave instances on the same machine, you have to update the slave configuration to bind the services (ssh, JMX, etc.) to different port numbers. By cluster, we mean several active instances, synchronized with each other. However, Apache Karaf Cellar can be installed to provide cluster support. You have to provide a username and password to access the JMX layer.

By default, it uses the karaf realm. Whenever a JMX operation is invoked, the roles of the user are checked against the required roles for this operation. The relevant configuration is prefixed with jmx. For instance, a specific configuration for an MBean with the object name foo. More generic configurations can be placed in the domain, e.g.

Apache Karaf looks for the required roles using the following process: the most specific configuration file is tried first. In this configuration, Apache Karaf looks for a regex match for the invocation, then a signature match for the invocation, then a method name match for the invocation, and finally a method name wildcard match. If any of the above match, the search stops and the associated roles are used. Bundles with an ID between 0 and 49 can be stopped only by an admin; other bundles can be stopped by a manager.

This MBean can be used by management clients monitoring tools, etc to decide whether to show certain MBeans or operations to the end user.

Karaf provides a jolokia feature, ready to install (you just need an HTTP Service installed first). You can find details on the Jolokia website and in its documentation. Apache Karaf Decanter provides a complete monitoring solution, including data history, turnkey dashboards, and SLA and alerting support. The WebConsole is extensible via a plugin system. Some applications can add new pages to the WebConsole.

For instance, Apache Karaf Cellar provides additional pages to administrate cluster groups, nodes, etc. To enable the Apache Karaf WebConsole, you just have to install the webconsole feature. NB: you have to install an HTTP Service first, either the http or the felix-http feature. The webconsole feature automatically installs the http feature (see the WebContainer section for details). As the Apache Karaf WebConsole uses the security framework, a username and password will be prompted.

By default, only users with the admin role are allowed to log on to the Apache Karaf WebConsole. Apache Karaf provides an optional Scheduler, which provides a Service Listener that listens for Runnable services and schedules their execution based on the service properties.

This Scheduler implementation uses the Quartz Scheduler library to understand cron-like expressions. To enable the Apache Karaf Scheduler, you just have to install the scheduler feature:.

The scheduler feature automatically installs the scheduler command group, too. Defines the period for a job. The period is expressed in seconds. This property needs to be of type Long. Defines whether a periodic job should be scheduled immediately. The default is not to start immediately; the job is first started after the period has expired. This property needs to be of type Boolean. This example uses Declarative Services to register a service of type "org.apache.karaf.scheduler.Job" so that it is recognized by the Scheduler Service.

Alternatively, jobs can be registered with the type Runnable in a more API-neutral way. Recommendation: before using this low-level API for registering jobs, consider using the whiteboard approach instead. You can change the scheduling of an existing job using the scheduler:reschedule command.
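For instance (the job name and option names are illustrative; check the help of scheduler:reschedule for the exact options of your Karaf version):

```
karaf@root()> scheduler:list
karaf@root()> scheduler:reschedule --period 60 myJob
```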

By default, the Apache Karaf scheduler uses in-memory storage for jobs. You can set up the schedulers of several Karaf instances to use a shared job storage. You can create the datasource to this database using the regular Karaf jdbc commands. For instance, to set up a DataSource for a remote Derby database, you can use jdbc:ds-create. Then the schedulers of the Karaf instances will share the same JDBC job store and can work in a "clustered" way. The Apache Karaf default configuration is sized for small to medium needs and to work on most machines.

Generally speaking, a good approach for tuning is to enable -verbose:gc and use tools like VisualVM to identify potential memory leaks and to see possible optimisations of the memory spaces and GC. It will give more resources to Apache Karaf and avoid perm space saturation if you refresh bundles a lot. Depending on the use cases and the usage of the heap, the G1 GC algorithm can give good performance improvements.
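For instance, the JVM sizing and GC options can be adjusted through the environment variables honored by bin/setenv (the values here are illustrative starting points, not recommendations):

```shell
# Illustrative JVM settings for Karaf, typically placed in bin/setenv.
export JAVA_MIN_MEM=512M
export JAVA_MAX_MEM=2048M
export EXTRA_JAVA_OPTS="-XX:+UseG1GC -verbose:gc"
```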

If the current number of connections is greater than this value, the status is "low on resources". In that case, a new connection timeout is applied: the lowResourceMaxIdleTime. For instance, if you use Apache Camel inside Apache Karaf, Camel components can create a lot of threads.

Most of the time, the default configuration in Apache Karaf is fine and works for most use cases. However, sometimes you may not want to use the packages provided by the JVM, but the same packages provided by a bundle. If you encounter issues like performance degradation or weird behaviour, it can be helpful to have a kind of snapshot of the current activity of the container.

Instead of a zip archive, you can create the dump exploded in a directory using the -d (--directory) option. The bundle:dynamic-import command allows you to enable or disable the dynamic import of a given bundle. The purpose of dynamic import is to allow a bundle to be wired up to packages that may not be known about in advance.

The shell:stack-traces-print command prints the full stack trace when the execution of a command throws an exception. You can enable or disable this behaviour on the fly by passing true (enable) or false (disable) to the command.

The bundle:tree-show command shows the bundle dependency tree, based on the wiring information, of a given single bundle ID. The bundle:watch command enables watching the local Maven repository for updates on bundles. If the bundle file changes in the Maven repository, Apache Karaf will automatically update the bundle. The bundle:watch command allows you to configure a set of URLs to monitor. All bundles whose location matches the given URL will be automatically updated.

It avoids needing to manually update the bundles or copy them to the system folder. Once you have assigned a value to a variable, you can display this value using the "resolved" variable name. Here we access the bundle variable (an array containing all bundles), and we want to display the bundle location for the bundle at index 1 in the array. The shell has a built-in expression parser. It returns one value if the condition evaluates to true, or the other if it evaluates to false.

Converts an angle measured in degrees to an approximately equivalent angle measured in radians. Converts an angle measured in radians to an approximately equivalent angle measured in degrees. For instance, you can access the bundles context variable and send it as input to the grep command. The Apache Karaf console provides a set of implicit constants and variables that you can use in your scripts.

Variables that are defined as a Function (such as closures) will be executed automatically. It means that you can create an object using the new directive and call methods on the object. This second example shows a script that waits for an OSGi service, up to a given timeout, and can be combined with other scripts. As described in the users guide, Apache Karaf supports remote access both to the console (by embedding an SSHd server) and to the management layer.

This SSH client can be written in pure Java or in another language. It should contain the same welcome and prompt entries, but those will be used for external clients connecting through ssh. As for the console, you can use the following pom.xml. The most generally useful features of the karaf-maven-plugin are exposed as packagings.

To use the packagings, the pom or an ancestor must configure the karaf-maven-plugin with extensions. The feature packaging verifies a features.xml descriptor.

The kar packaging generates a features.xml descriptor. The karaf-assembly packaging assembles a Karaf server based on the features descriptors and kar files listed as Maven dependencies. The karaf:commands-generate-help goal generates documentation containing Karaf commands help. It looks for Karaf commands in the current project class loader and generates the help as displayed with the --help option in the Karaf shell console.

The directory where the documentation output files are to be generated. The output format (docbx, asciidoc, or conf) of the commands documentation. Default value: docbx. The karaf-maven-plugin provides several goals to help you create and verify features XML descriptors, as well as leverage your features to create a custom Karaf distribution. The karaf:features-generate-descriptor goal generates a features XML file based on the Maven dependencies. By default, it will follow Maven transitive dependencies, stopping when it encounters bundles already present in features that are Maven dependencies.

Specifies processing of feature repositories that are transitive Maven dependencies. If false, all features in these repositories become dependencies of the generated feature. If true, all features in these repositories are copied into the generated feature repository.

The start level for the bundles determined from Maven dependencies. This can be overridden by specifying the bundle in the source feature. The karaf:verify goal verifies and validates a features XML descriptor by checking if all the required imports for the bundles defined in the features can be matched to a provided export. By default, the plugin tries to add the Karaf core features standard and enterprise in the repositories set. The list of features to verify. If not specified, all features in the descriptors will be verified.

Consider using the karaf-assembly packaging, which makes it easy to assemble a custom distribution in one step, instead of this individual goal. The karaf:features-add-to-repository goal adds all the required bundles for a given set of features into a directory. By default, the Karaf core features descriptors (standard and enterprise) are automatically included in the descriptors set.

Default value: false. The karaf:kar goal assembles a KAR archive from a features XML descriptor file, normally generated in the same project with the karaf:features-generate-descriptor goal. The features descriptor and all the bundles mentioned in it are installed in this directory.

For setting up the values for step 3, you can also refer to the additional information section at the end of the note. Bounce the database. Bounce the application server. This parameter sets the minimum authentication protocol allowed for clients, and when a server is acting as a client (such as connecting over a database link), when connecting to Oracle Database instances.

If the version does not meet or exceed the value defined by this parameter, then authentication fails with an ORA-28040: No matching authentication protocol error. To set the minimum authentication protocol allowed when connecting to Oracle Database instances. If the client version does not meet or exceed the value defined by this parameter, then authentication fails with an ORA-28040: No matching authentication protocol error or an ORA-03134: Connections to this server version are no longer supported error.

A greater value means the server is less compatible in terms of the protocol that clients must understand in order to authenticate. The server is also more restrictive in terms of the password version that must exist to authenticate any specific account. Note the following implications of setting the value to 12 or 12a:.

To take advantage of the password protections introduced in Oracle Database 11g, users must change their passwords. The new passwords are case sensitive. When an account password is changed, the earlier 10G case-insensitive password version is automatically removed. It may be necessary to reset the password for that account. Note the following implication of setting the value to 12a:.

When an account password is changed, the earlier 10G case-insensitive password version and the 11G password version are both automatically removed. The client must support certain abilities of an authentication protocol before the server will authenticate.

If the client does not support a specified authentication ability, then the server rejects the connection with an ORA-28040: No matching authentication protocol error message. The following is the list of all client abilities. Some clients do not have all abilities. More recent clients have all the capabilities of older clients, but older clients tend to have fewer abilities than more recent clients.

O5L: The ability to perform the Oracle Database 10g authentication protocol using the 10G password version. O4L: The ability to perform the Oracle9i database authentication protocol using the 10G password version.

O3L: The ability to perform the Oracle8i database authentication protocol using the 10G password version. A higher ability is more recent and secure than a lower ability. Clients that are more recent have all the capabilities of the older clients. Clients using releases earlier than Oracle Database release. Footnote 1: This is considered "Exclusive Mode" because it excludes the use of both the 10G and 11G password versions.

Footnote 2 This is considered "Exclusive Mode" because it excludes the use of the 10G password version. See the following: Note JDBC - Version The Network Adapter could not establish the connection Driver java. Driver at org. Sqoop : Got exception running Sqoop : java. RuntimeException : Could n ot load db driv er class : oracle. RuntimeException : Could n ot load db driv er class : co m. Driv er java. RuntimeException : Could n ot load db driv er ERROR sqoop.



