To log in:
curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin'
To create the API:
curl -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-add/ajax/add.jag -d 'action=addAPI&name=PizzaShack3&visibility=public&version=1.0.0&description=New API&endpointType=secured&http_checked=&https_checked=https&wsdl=&tags=automated published&tier=Gold&thumbUrl=https://pbs.twimg.com/profile_images/493735622587064320/z7qZUG0E_bigger.png&context=/pizza3&tiersCollection=Unlimited&resourceCount=0&resourceMethod-0=GET&resourceMethodAuthType-0=Application,Application User&resourceMethodThrottlingTier-0=Unlimited&uriTemplate-0=/assignments' -d 'endpoint_config={"production_endpoints":{"url":"http://localhost:8080/pizzashack-api-1.0.0","config":{"format":"leave-as-is","optimize":"leave-as-is","suspendErrorCode":["101505"],"suspendDuration":0,"suspendMaxDuration":0,"factor":1,"actionSelect":"fault","actionDuration":30000}},"endpoint_type":"http"}'
To publish the API:
curl -X POST -b cookies 'http://localhost:9763/publisher/site/blocks/life-cycles/ajax/life-cycles.jag' -d 'action=updateStatus&name=PizzaShack3&version=1.0.0&provider=admin&status=PUBLISHED&publishToGateway=true&requireResubscription=true'
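To verify the outcome, the APIs visible to the logged-in user can be listed through the publisher's listing API (a sketch, reusing the same cookie jar; the action name is the one exposed by the APIM 1.x builds I have used, so verify it against your version):
curl -b cookies 'http://localhost:9763/publisher/site/blocks/listing/ajax/item-list.jag' -d 'action=getAllAPIs'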
Sunday, January 18, 2015
Testing WSO2 BAM 2.5.0 Kafka Input Event Adaptor
WSO2 BAM 2.5.0 now supports processing data streams through the Kafka Input Event Adaptor.
Apache Kafka is a fast, scalable and distributed publish-subscribe messaging system.
It maintains topics which contain message feeds. These messages are written to topics by Producers and read by Consumers.
These topics are partitioned and replicated across multiple nodes, thereby making Kafka a distributed system.
Let's see how to configure a Kafka-based input adaptor in WSO2 BAM 2.5.0 and capture attributes from a message published to a Kafka topic.
Setting up Kafka:
Kafka can be downloaded from the Apache Kafka site (this post uses kafka_2.10-0.8.1.1).
Once downloaded, extract the distribution as follows.
tar xvf kafka_2.10-0.8.1.1.tgz
Navigate to the folder the file was extracted to, as follows.
cd kafka_2.10-0.8.1.1/
Execute the following command to start the ZooKeeper server.
bin/zookeeper-server-start.sh config/zookeeper.properties
Then open another console, navigate to the Kafka folder and execute the following command to start the Kafka server.
bin/kafka-server-start.sh config/server.properties
Now open another console, navigate to the Kafka folder and execute the following command to create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafkaTestTopic1
The topic name given here is 'kafkaTestTopic1'.
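To confirm the topic was created, the topics registered in ZooKeeper can be listed as follows.
bin/kafka-topics.sh --list --zookeeper localhost:2181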
A producer needs to be started to send messages to the created topic.
Therefore, open another console, navigate to the Kafka folder and execute the following command.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic kafkaTestTopic1
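Optionally, a console consumer can be started in yet another console to confirm that messages published by the producer actually reach the topic (in this Kafka version the console consumer connects via ZooKeeper):
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkaTestTopic1 --from-beginning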
Setting up WSO2 BAM 2.5.0:
Download WSO2 BAM 2.5.0 from the WSO2 site and extract the distribution.
Copy the following .jar files from <Kafka_Home>/lib to <BAM_HOME>/repository/components/lib:
kafka_2.10-0.8.1.1.jar
scala-library-2.10.1.jar
zkclient-0.3.jar
zookeeper-3.3.4.jar
Navigate to <BAM_HOME>/bin and start the server as follows.
sh wso2server.sh
Log in to the Management Console of BAM and navigate to Configure -> Event Processor Configs -> Input Event Adaptors.
Click on 'Add Input Event Adaptor', specify the input adaptor details (the adaptor name, type and the connection details of the Kafka setup) and create the input adaptor.
Next, navigate to Main -> Event Processor -> Event Streams.
Click on 'Add Event Stream' and specify an event stream to capture the data required.
Specify the payload attributes and their types to be captured.
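For reference, a stream capturing the two attributes used later in this post would have a definition along the following lines. This is a sketch in the BAM stream definition format; the stream name and attributes match the ones used in this post, and the type names should be checked against your BAM version.
{
  "name": "kafkaEventStream",
  "version": "1.0.0",
  "nickName": "kafkaEventStream",
  "description": "Captures test events published to Kafka",
  "payloadData": [
    {"name": "kafkaAtt1", "type": "STRING"},
    {"name": "kafkaAtt2", "type": "STRING"}
  ]
}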
When adding the event stream, select the 'Custom Event Builder' option from the event builder options that appear.
Specify the event builder configuration as follows and add an event builder through the pop-up window that appears.
When specifying the event builder configuration, you need to add the name of the topic to listen to.
Here I have added the topic created under the Kafka setup, i.e. kafkaTestTopic1.
The input mapping type is specified as json.
Now it is required to send a message to the topic.
Go back to the producer console started earlier and paste the following JSON string.
{"event": { "payloadData": {"kafkaAtt1": "4","kafkaAtt2": "TestString"}}}
Once this is done, the values of the attributes specified in the kafkaEventStream will be captured, and an entry will be made for the stream kafkaEventStream in the Cassandra keyspace EVENT_KS.
To view this, navigate to Tools -> Cassandra Explorer -> Connect to Cluster
and specify the Cassandra connection details.
Once connected, you will be able to see the kafkaEventStream under EVENT_KS.
Click on 'kafkaEventStream'.
The entry made for the captured data will be displayed.
Click on the 'View more' option.
A detailed version of the stream entry will be displayed.
You can see the captured attributes and their values.
Monday, September 29, 2014
Verifying entitlement caching with WSO2 API Manager 1.7.0
The API invocation flow with XACML is as follows:
1. Request is received by the Gateway (APIM).
2. Token is validated by the Key Manager (APIM) and the validation results are sent back to the Gateway.
3. If the token is valid, the entitlement mediator will call the identity server for XACML policy evaluation.
4. If the result of the policy decision is 'permit' the actual back end endpoint will be invoked.
When you enable response caching for an API, the cache mediator is engaged before steps 3 and 4 (i.e. before calling the Identity Server for XACML policy evaluation and before calling the backend endpoint).
This will cache the response and the result of the XACML policy decision.
Setup:
1. Install the following features on API Manager and restart the server.
Features
-XACML
-XACML Mediation
Repository Location : http://dist.wso2.org/p2/carbon/releases/turing/
2. Create an API in the API Publisher and replace the content of the deployed synapse configuration with that of this file.
3. In the API created above, I have used the following sample APIs as endpoints instead of calling external endpoints. Therefore, copy these files to the <AM_HOME>/repository/deployment/server/synapse-configs/default/api folder.
acceptResponse_api.xml
denyResponse_api.xml
4. The following sample XACML policy should be deployed in the Identity Server used for entitlement validation.
sample_xacml_policy.xml
Verification:
Step 1:
Enable debug logs for the package 'org.wso2.carbon.identity.entitlement' of the Identity server.
- For this, add the following entry to the <IS_HOME>/repository/conf/log4j.properties file and restart the server.
log4j.logger.org.wso2.carbon.identity.entitlement=DEBUG
Step 2:
Create an API with response caching enabled.
Subscribe to this API and invoke it.
- This will print the debug logs of the package enabled in step 1 on the Identity Server console.
- This indicates that an initial request was made to the Identity Server for XACML policy evaluation.
- The policy decision and the backend response are cached at the APIM end at this point.
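For example, an invocation could look like the following (the context, version, resource and token are placeholders for your own values; 8280 is the default APIM gateway HTTP port):
curl -v -H 'Authorization: Bearer <access-token>' http://localhost:8280/<context>/<version>/<resource>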
Step 3:
Invoke the API again (without changing the request).
- Since the request is identical to the request made at step 2, the response will be fetched from the response cache at APIM.
- No requests will be made to the Identity server for policy evaluation or to the actual back end.
- Therefore, the debug logs observed on the Identity Server console in step 2 will not be logged again.
Step 4 (Optional):
Repeat steps 2 and 3 after changing a request parameter.
- Since the request is different (due to the changed request parameter), you will be able to observe the above-mentioned debug logs on the Identity Server for the first invocation,
- but not for the second invocation, as the response will be fetched from the cache.
Thursday, April 17, 2014
Guidelines for configuring WSO2 API-Manager workflows in a clustered environment
1) The workflow server URLs in the site.json file should be updated with the correct port of the Business Process Server, taking its port offset into account. (The workflow-related configuration files by default contain port values that assume a BPS port offset of 2.)
If BPS and API Manager are to be pointed to the same user store, the workflow admin of the publisher node can be used, eliminating the need for a dedicated workflow node. Since in a typical scenario the workflow admin users are from the same user store as APIM, the workflow admin residing in the publisher node can be used instead of hosting it separately. The publisher node is recommended here because workflow administration is an administrative-level task and the publisher node is meant to reside within a private network.
In this case, the URLs in <APIM_PUBLISHER_HOME>/repository/deployment/server/jaggeryapps/admin-dashboard/site/conf/site.json need to be updated.
If the workflow admin users are not from the API Manager user store, use a separate node for the workflow admin. A dedicated node is only required when the workflow admin users reside in a separate user store; in this case APIM and BPS will be pointed to different user stores.
In this case, the URLs in <APIM_WORKFLOW_HOME>/repository/deployment/server/jaggeryapps/admin-dashboard/site/conf/site.json need to be updated.
If a workflow admin user role needs to be defined, add it under 'allowedRoles', e.g. "allowedRoles":"wfadmin".
Once this change is done, only users with the given role will be allowed to log in to the workflow admin dashboard.
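As a rough sketch, the entries being discussed would look like the following in site.json (illustrative only; the exact key names differ between APIM versions, so check your own site.json, and 9445 assumes a BPS port offset of 2):
"workflows": {
    "serverUrl": "https://localhost:9445/services/"
},
"allowedRoles": "wfadmin"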
2) Copy the <APIM_HOME>/business-processes/epr folder to <BPS_HOME>/repository/conf.
3) Make the following changes in the .epr files of <BPS_HOME>/repository/conf/epr:
Change the following elements in case the default admin user has been changed:
<authorization-username>
<authorization-password>
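For example, if the default credentials were changed, the corresponding entries in each .epr file would be updated along these lines (an illustrative fragment only; the rest of the .epr file is omitted here):
<authorization-username>admin</authorization-username>
<authorization-password>admin</authorization-password>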
The WorkFlowCallBackService endpoints of the following files should be pointed to the gateway:
<BPS_HOME>/repository/conf/epr/ApplicationCallbackService.epr
<BPS_HOME>/repository/conf/epr/RegistrationCallbackService.epr
<BPS_HOME>/repository/conf/epr/SubscriptionCallbackService.epr
<BPS_HOME>/repository/conf/epr/UserSignupProcess.epr
The Service endpoints of the following files should be pointed to the Business Process Server:
<BPS_HOME>/repository/conf/epr/ApplicationService.epr
<BPS_HOME>/repository/conf/epr/RegistrationService.epr
<BPS_HOME>/repository/conf/epr/SubscriptionService.epr
<BPS_HOME>/repository/conf/epr/UserSignupService.epr
4) Update the ports in the WSDL files under <API_MANAGER_HOME>/business-processes/<relevant workflow>/HumanTask with the correct port of the Business Process Server.
5) Upload the HumanTasks located in <API_MANAGER_HOME>/business-processes/<relevant workflow>/HumanTask to the Business Process Server (Main -> Manage -> Human Tasks).
Alternatively, you can copy them to the <BPS_HOME>/repository/deployment/server/humantasks folder.
6) Upload the BPEL processes located in <API_MANAGER_HOME>/business-processes/<relevant workflow>/BPEL to the Business Process Server (Main -> Manage -> Processes).
Alternatively, you can copy them to the <BPS_HOME>/repository/deployment/server/bpel folder.
7) Point the endpoint of the proxy service <APIM_GATEWAY_HOME>/repository/deployment/server/synapse-configs/default/proxy-services/WorkflowCallbackService.xml of the gateway node to the 'Store' node of the cluster.
This proxy service is used to convert the SOAP messages received from the Business Process Server (which is unable to send JSON messages directly) to JSON, in order to call a REST endpoint.
8) Enable the executor relevant to the required workflow in the following registry file by logging into the management console (the node does not matter, since the governance registry is shared):
Main -> Resources -> _system -> governance -> apimgt -> applicationdata -> workflow-extensions.xml
Point the 'serviceEndpoint' to the Business Process Server.
Point the 'callbackURL' to the Gateway Node of the cluster.
The reason for pointing the callbackURL to the gateway node is that proxy service changes (e.g. security policy changes) are usually done on the gateway node; the store node does not get updated with such changes.
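For illustration, an enabled executor entry in workflow-extensions.xml looks roughly like the following (shown for application creation; the endpoints, credentials and process name are examples that should match your own BPS and gateway nodes, and the executor class name should be verified against your APIM version):
<ApplicationCreation executor="org.wso2.carbon.apimgt.impl.workflow.ApplicationCreationWSWorkflowExecutor">
    <Property name="serviceEndpoint">http://localhost:9765/services/ApplicationApprovalWorkFlowProcess/</Property>
    <Property name="username">admin</Property>
    <Property name="password">admin</Property>
    <Property name="callbackURL">https://localhost:8243/services/WorkflowCallbackService</Property>
</ApplicationCreation>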
Wednesday, February 26, 2014
Enabling log4jdbc to verify key caching functionality in WSO2 API-Manager
1. Configure the API Manager database (WSO2AM_DB) of WSO2 API-Manager with MySQL
(http://docs.wso2.org/display/AM160/Setting+up+with+MySQL)
2. Place the log4jdbc driver in <KEY_MANAGER_HOME>/repository/components/lib
(https://log4jdbc.googlecode.com/files/log4jdbc4-1.2beta2.jar)
3. Append the following to the <KEY_MANAGER_HOME>/repository/conf/log4j.properties file.
! Log all JDBC calls except for ResultSet calls
! Log timing information about the SQL that is executed.
log4j.logger.jdbc.sqltiming=DEBUG,sqltiming
log4j.additivity.jdbc.sqltiming=false
! the appender used for the JDBC API layer call logging above, sql timing
log4j.appender.sqltiming=org.apache.log4j.FileAppender
log4j.appender.sqltiming.File=./repository/logs/sqltiming.log
log4j.appender.sqltiming.Append=false
log4j.appender.sqltiming.layout=org.apache.log4j.PatternLayout
log4j.appender.sqltiming.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n
This configures the following:
- Log level (DEBUG) : Captures all debug level logs of package jdbc.sqltiming
- Appender (sqltiming): Uses a FileAppender that writes all logs to the file specified in the property 'log4j.appender.sqltiming.File'
- Layout : Format to use when logging
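With this conversion pattern, each entry written to sqltiming.log starts with '----->', a timestamp and the executed SQL statement. An illustrative line (not an actual capture; log4jdbc appends the execution time to each statement) would be:
-----> 2014-02-26 10:15:30.123 SELECT 1 {executed in 0 msec}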
4. Make the following changes to the WSO2AM_DB datasource located in the file:
<KEY_MANAGER_HOME>/repository/conf/datasources/master-datasources.xml
- Add 'log4jdbc' to the JDBC URL in the <url> element, so that 'jdbc:mysql://...' becomes 'jdbc:log4jdbc:mysql://...'.
- Change the driver class to the following:
net.sf.log4jdbc.DriverSpy
The log4jdbc driver uses log4j to log messages. When 'log4jdbc' is added to the URL, the JDBC calls pass through the log4jdbc driver, where they are logged and then passed over to the MySQL driver.
Once the above changes are done, your WSO2AM_DB datasource should look like this:
<datasource>
    <name>WSO2AM_DB</name>
    <description>Datasource for AM database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:log4jdbc:mysql://localhost:3306/WSO2AM_DB</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
Now logging is enabled.
To verify Key Manager caching:
1. Open <KEY_MANAGER_HOME>/repository/conf/api-manager.xml file and enable/disable <EnableKeyMgtValidationInfoCache> as required.
2. Open <GATEWAY_HOME>/repository/conf/api-manager.xml file and disable <EnableGatewayKeyCache>
3. Restart both the Gateway and Key Manager nodes.
4. Copy <KEY_MANAGER_HOME>/repository/logs/sqltiming.log and save it under a different name.
cp sqltiming.log sqltiming.log.1
5. Invoke the desired API.
6. Take another copy of sqltiming.log and save under a new name.
cp sqltiming.log sqltiming.log.2
7. Verify the database calls by checking the difference between logs.
diff sqltiming.log.1 sqltiming.log.2
If KM caching is enabled: the access-token-related database call should be logged only once, until the cache expires.
If KM caching is disabled: the access-token-related database call should be logged every time the API is invoked.
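To narrow the comparison down to token-related calls, the log copies can also be searched directly; for instance, counting occurrences of the access token column referenced in the sample query at the end of this post:
grep -c 'ACCESS_TOKEN' sqltiming.log.1
grep -c 'ACCESS_TOKEN' sqltiming.log.2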
To verify Gateway caching:
Gateway caching can be tested by verifying whether database calls are logged on the Key Manager node when gateway caching is enabled/disabled.
1. Open <KEY_MANAGER_HOME>/repository/conf/api-manager.xml file and disable <EnableKeyMgtValidationInfoCache>
2. Open <GATEWAY_HOME>/repository/conf/api-manager.xml file and enable/disable <EnableGatewayKeyCache> as required.
3. Restart both the Gateway and Key Manager nodes.
4. Copy <KEY_MANAGER_HOME>/repository/logs/sqltiming.log and save it under a different name.
cp sqltiming.log sqltiming.log.1
5. Invoke the desired API.
6. Take another copy of sqltiming.log and save under a new name.
cp sqltiming.log sqltiming.log.2
7. Verify the database calls by checking the difference between logs.
diff sqltiming.log.1 sqltiming.log.2
If GW caching is enabled: the access-token-related database call should be logged only once, until the cache expires. (In this scenario, only the first invocation hits the Key Manager; the subsequent invocations use the token-related data in the gateway cache.)
If GW caching is disabled: the access-token-related database call should be logged every time the API is invoked. (In this scenario, every invocation hits the Key Manager, as token-related data is not cached on the gateway.)
E.g. an entry in sqltiming.log for the access token validation query (the full query is truncated in the original post): it selects columns such as IAT.VALIDITY_PERIOD and IAT.TIME_CREATED from the access token table (aliased IAT) joined with AM_SUBSCRIPTION (SUB), AM_SUBSCRIBER (SUBS), AM_APPLICATION (APP), AM_APPLICATION_KEY_MAPPING (AKM) and AM_API (API), filtered by the access token, the API context ('/echo') and the API version ('1.0.0').