Tuesday, December 20, 2016

Working with cloud instances in WSO2

Hi all,

I had a chance to work with cloud instances in a WSO2 environment. Four instances were created on Ubuntu, and I set up a four-node cluster in the cloud.

In this blog I am not going to explain the cluster setup. Instead, here is a brief explanation of cloud instances and how to handle them.

When we create a cloud instance we get a key file for it. Before using the key, we need to restrict its permissions with the following command.

    change key file permission : chmod 600 <file>

Then we can connect to the instance by specifying the IP address of the node.

    ssh -i <key file> ubuntu@<IP>

When connecting, it will ask for the passphrase of the key; we need to enter that value.

Now we are logged in to the cloud instance.

The cloud instance is like a bare machine when we buy it (except that an OS might be installed), so we need to set up everything else from the terminal.
  • Command to download from a web link (if wget is missing, install it first with sudo apt-get install wget).

    wget <link>
  • To install unzip

    sudo apt-get install unzip
  • Install Java, the same way as installing Java on any Ubuntu machine [1].
  • If you want to copy files from your local machine to the cloud instance, you can use sftp.

    Start an sftp session to the cloud instance.

              sftp -i <key file> ubuntu@<IP>

    Copy file

              put <FROM> <TO>
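Putting the commands above together, a full session might look like this. This is a sketch: the key file name (mykey.pem), the IP address (203.0.113.10), and the uploaded file name are hypothetical examples, and the remote commands are commented out because they need a real instance.

```shell
# Hypothetical key file; replace with the key you downloaded for your instance.
KEY=mykey.pem
touch "$KEY"                 # stands in for the downloaded key in this sketch
chmod 600 "$KEY"             # ssh refuses key files readable by other users
stat -c '%a' "$KEY"          # prints 600, confirming the permission change

# Then connect and copy files (commented out: these need a real instance):
# ssh  -i "$KEY" ubuntu@203.0.113.10
# sftp -i "$KEY" ubuntu@203.0.113.10
#   sftp> put wso2cep-4.2.0.zip /home/ubuntu/
```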

References

 [1] https://www.digitalocean.com/community/tutorials/how-to-install-java-on-ubuntu-with-apt-get

Friday, October 14, 2016

RabbitMQ SSL Connection Using the RabbitMQ Event Adapter in WSO2 CEP - Part 2

Hi all,


This is a continuation of Part 1. In the previous post we covered the RabbitMQ broker-side configuration; here we will look at the CEP-side configuration. The RabbitMQ receiver is an extension for CEP, so we need to download it and add it to the CEP server. The steps are given below.

Steps 
  1. Visit this link to download the RabbitMQ receiver extension.
  2. Download the rabbitmq client jar from this link.
  3. Place the extension .jar file you downloaded in step 1 into the <CEP_HOME>/repository/components/dropins directory.
  4. Place the rabbitmq client .jar file you downloaded in step 2 into the <CEP_HOME>/repository/components/lib directory.
  5. Start the CEP server by going to the <CEP_HOME>/bin directory in a terminal and running sh ./wso2server.sh
  6. Create a stream in CEP. You can read more about this here.
  7. Go to Main => Manage => Receivers to open the Available Receivers page, then click Add Event Receiver in the management console. You can reach the management console at https://172.17.0.1:9443/carbon
  8. Select Input Event Adapter Type as rabbitmq.
  9. You will then get a page like the one below.


   10. Next, specify values for the properties shown above. Here I have listed the important and mandatory property values.

  • Host Name: localhost
  • Host Port: 5671
  • Username: guest
  • Password: guest
  • Queue Name: testsslqueue
  • Exchange Name: testsslexchange
  • SSL Enabled: True
  • SSL Keystore Location: ../client/keycert.p12
  • SSL Keystore Type: PKCS12
  • SSL Keystore Password: MySecretPassword
  • SSL Truststore Location: ../rabbitstore (these are the values we took from step 5 of the Part 1 blog)
  • SSL Truststore Type: JKS
  • SSL Truststore Password: rabbitstore (the password we used in the keytool command in step 5)
  • Connection SSL Version: SSL
After specifying the above values, select the stream you added in step 6 and save.

Then, if there is a message in the queue you configured on the RabbitMQ broker, it will arrive in CEP. The data from the queue can then be used in an execution plan for further processing, and we can publish the results.

Tuesday, September 6, 2016

Introduction to WSO2 BPS

We can use WSO2 BPS for efficient business process management; it allows easy deployment of business processes written in either the WS-BPEL standard or the BPMN 2.0 standard.

Business Process: A collection of related and structured activities or tasks that implements a business use case and produces a specific service or output. A process may have zero or more well-defined inputs and an output.

Process Initiator: The person or application that initiates the business process.

Human task: A task where human interaction is involved in the business process.

BPEL: An XML-based language used for the definition and execution of business processes, covering composition and orchestration of web services.

BPMN: A graphical notation for business processes.

Simple BPEL Process Modeling
  • Download BPS product.
  • Go to <BPS_HOME> -> bin through terminal and execute sh ./wso2server.sh command.
  • Install the plug-in with pre-packaged Eclipse - This method uses a complete plug-in installation with pre-packaged Eclipse, so that you do not have to install Eclipse separately. On the WSO2 BPS product page, click Tooling and then download the distribution according to your operating system under the Eclipse JavaEE Mars + BPS Tooling 3.6.0 section.
  • Then we need to create a Carbon Composite Application Project. These steps are clearly explained at https://docs.wso2.com/display/BPS360/BPEL+Guide
 

Friday, September 2, 2016

Java Flight Recorder with WSO2 Products

Java Flight Recorder in Brief

A profiling and event collection framework built into the Oracle JDK. It gathers low-level information about the JVM and application behavior with negligible performance impact (less than 2%). With Java Flight Recorder, system administrators and developers have a new way to diagnose production issues. JFR provides a way to collect events from a Java application, from the OS layer through the JVM all the way up to the application itself. The collected events include thread latency events such as sleep, wait, lock contention, I/O, GC, and method profiling. JFR can be enabled by default and continuously collect low-level data from Java applications in production environments, which allows a much faster turnaround when a production issue occurs. Rather than turning on data gathering after the fact, the continuously collected JFR data is simply written to disk, and the analysis can be done on data collected from the application leading up to the issue rather than data collected afterwards. We can use JFR to do long-run testing in an effective way.


Here we will see how to use JFR for long-run testing of WSO2 products through JMC (Java Mission Control). I am taking WSO2 CEP as the WSO2 product.
  •  Go to <CEP_HOME>/bin and open wso2server.sh in a text editor such as gedit.
  • In the script you can see $JAVACMD. Below it, add these options to enable JFR:
    •  -XX:+UnlockCommercialFeatures \
    •   -XX:+FlightRecorder \
  • Start WSO2 CEP using sh ./wso2server.sh in a terminal.
  • Then start JMC. You can see org.wso2.carbon.bootstrap.Bootstrap in the UI, like below.

  • Right-click on org.wso2.carbon.bootstrap.Bootstrap and click Start Flight Recording. The following window will appear.


  • Then go through the next steps, make your relevant changes, and click Finish.
  • After the time period you specified, you will get a .jfr file in which you can see memory growth, CPU usage, and so on.
  • You can open this .jfr file at any time.
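For reference, the $JAVACMD edit from the second step ends up looking roughly like this in context. This is a sketch, not the exact script: the surrounding JVM options vary by product version, and only the two -XX lines are the actual addition.

```shell
$JAVACMD \
    -XX:+UnlockCommercialFeatures \
    -XX:+FlightRecorder \
    ... # the rest of the original wso2server.sh options follow unchanged
```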

    Thanks all :)

Thursday, August 18, 2016

RabbitMQ SSL Connection Using the RabbitMQ Event Adapter in WSO2 CEP - Part 1

Hi all,

I am writing this blog assuming you are familiar with the RabbitMQ broker. Here I mainly focus on securing the connection between the RabbitMQ message broker and WSO2 CEP, that is, how to receive messages securely from the RabbitMQ broker in a WSO2 CEP receiver. In the use case explained here, CEP consumes messages from the RabbitMQ server; so, simply put, CEP acts as the client and the RabbitMQ server acts as the server.

 Introduction to RabbitMQ SSL connection

In a normal connection we send messages without security, but confidential information such as credit card numbers cannot be sent in the clear. For that purpose we use SSL. SSL stands for Secure Sockets Layer. SSL allows sensitive information to be transmitted securely and ensures that all data passed between the server and client remains private and intact. SSL is an industry-standard security protocol. Protocols describe how algorithms should be used; in this case, the SSL protocol determines the encryption parameters for both the link and the data being transmitted.

Steps 

  1.  As the first step, we need to create our own certificate authority.
    • Open a terminal and go to a suitable folder (location) using the cd command.
    • Then use the commands below.
      • $ mkdir testca
      • $ cd testca
      • $ mkdir certs private
      • $ chmod 700 private
      • $ echo 01 > serial
      • $ touch index.txt
    • Then create a new file inside the testca directory using the following command.
      • $ gedit openssl.cnf

        When we run this command, a file will open in gedit. Copy and paste the following content into it and save.

        [ ca ]
        default_ca = testca
        
        [ testca ]
        dir = .
        certificate = $dir/cacert.pem
        database = $dir/index.txt
        new_certs_dir = $dir/certs
        private_key = $dir/private/cakey.pem
        serial = $dir/serial
        
        default_crl_days = 7
        default_days = 365
        default_md = sha256
        
        policy = testca_policy
        x509_extensions = certificate_extensions
        
        [ testca_policy ]
        commonName = supplied
        stateOrProvinceName = optional
        countryName = optional
        emailAddress = optional
        organizationName = optional
        organizationalUnitName = optional
        
        [ certificate_extensions ]
        basicConstraints = CA:false
        
        [ req ]
        default_bits = 2048
        default_keyfile = ./private/cakey.pem
        default_md = sha256
        prompt = yes
        distinguished_name = root_ca_distinguished_name
        x509_extensions = root_ca_extensions
        
        [ root_ca_distinguished_name ]
        commonName = hostname
        
        [ root_ca_extensions ]
        basicConstraints = CA:true
        keyUsage = keyCertSign, cRLSign
        
        [ client_ca_extensions ]
        basicConstraints = CA:false
        keyUsage = digitalSignature
        extendedKeyUsage = 1.3.6.1.5.5.7.3.2
        
        [ server_ca_extensions ]
        basicConstraints = CA:false
        keyUsage = keyEncipherment
        extendedKeyUsage = 1.3.6.1.5.5.7.3.1
        
        • Now we can generate the key and certificate that our test certificate authority will use. Still within the testca directory:
          $ openssl req -x509 -config openssl.cnf -newkey rsa:2048 -days 365 -out cacert.pem -outform PEM -subj /CN=MyTestCA/ -nodes
          $ openssl x509 -in cacert.pem -out cacert.cer -outform DER
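As a quick standalone sanity check (run in an empty scratch directory, assuming openssl is on the PATH), the same kind of self-signed CA certificate can be generated and inspected in one go; the subject printed should match the /CN=MyTestCA/ given above. Note this sketch uses -keyout directly instead of relying on the default_keyfile setting from openssl.cnf.

```shell
# Generate a throwaway self-signed CA key and certificate, as in the step above.
openssl req -x509 -newkey rsa:2048 -days 365 -subj /CN=MyTestCA/ -nodes \
        -keyout cakey.pem -out cacert.pem 2>/dev/null

# Inspect the result: prints the subject and expiry date of the new CA cert.
openssl x509 -in cacert.pem -noout -subject -enddate
```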

  2.  Generating the certificate and key for the server
    • Apply the following commands (assuming you are still in the testca folder).
      • $ cd ..
        $ ls
        testca
        $ mkdir server
        $ cd server
        $ openssl genrsa -out key.pem 2048
        $ openssl req -new -key key.pem -out req.pem -outform PEM -subj /CN=$(hostname)/O=server/ -nodes
        $ cd ../testca
        $ openssl ca -config openssl.cnf -in ../server/req.pem -out ../server/cert.pem -notext -batch -extensions server_ca_extensions
        $ cd ../server
        $ openssl pkcs12 -export -out keycert.p12 -in cert.pem -inkey key.pem -passout pass:MySecretPassword
         
  3.  Generating the certificate and key for the client
    •  Apply the following commands.
      • $ cd ..
        $ ls
        server testca
        $ mkdir client
        $ cd client
        $ openssl genrsa -out key.pem 2048
        $ openssl req -new -key key.pem -out req.pem -outform PEM -subj /CN=$(hostname)/O=client/ -nodes
        $ cd ../testca
        $ openssl ca -config openssl.cnf -in ../client/req.pem -out ../client/cert.pem -notext -batch -extensions client_ca_extensions
        $ cd ../client
        $ openssl pkcs12 -export -out keycert.p12 -in cert.pem -inkey key.pem -passout pass:MySecretPassword
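To confirm that this kind of signing produces a certificate that chains back to the CA, a miniature version of the whole setup can be reproduced and checked with openssl verify. This is a hedged, standalone sketch for a scratch directory: the file names (ca.pem, client.pem, etc.) are throwaway examples rather than the ones used in the steps above, and signing is simplified to `openssl x509 -req` instead of the `openssl ca` command.

```shell
# Throwaway CA key and self-signed certificate.
openssl req -x509 -newkey rsa:2048 -subj /CN=MiniCA/ -nodes \
        -keyout ca.key -out ca.pem 2>/dev/null
# Client key and certificate signing request.
openssl req -newkey rsa:2048 -subj /CN=client/ -nodes \
        -keyout client.key -out client.csr 2>/dev/null
# Sign the CSR with the CA.
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key \
        -set_serial 01 -days 1 -out client.pem 2>/dev/null
# Verify the chain; prints "client.pem: OK" on success.
openssl verify -CAfile ca.pem client.pem
```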
         
  4. Configuring RabbitMQ Server
    To enable the SSL support in RabbitMQ, we need to provide to RabbitMQ the location of the root certificate, the server's certificate file, and the server's key. We also need to tell it to listen on a socket that is going to be used for SSL connections, and we need to tell it whether it should ask for clients to present certificates, and if the client does present a certificate, whether we should accept the certificate if we can't establish a chain of trust to it.

    For that we need to create a file inside "/etc/rabbitmq" named "rabbitmq.config". Copy and paste the following configuration into that file.

    [
      {rabbit, [
         {ssl_listeners, [5671]},
         {ssl_options, [{cacertfile,"/path/to/testca/cacert.pem"},
                        {certfile,"/path/to/server/cert.pem"},
                        {keyfile,"/path/to/server/key.pem"},
                        {verify,verify_peer},
                        {fail_if_no_peer_cert,false}]}
       ]}
    ].
  5. Trust the Client's Root CA
    Use the following command.
    $ cat testca/cacert.pem >> all_cacerts.pem
     
    To validate the certificate, use the command below.
     
    keytool -import -alias server1 -file /path/to/server/cert.pem -keystore /path/to/rabbitstore

    If you want to study more about this configuration, go to this link. Now we have finished the server-side configuration and created the certificates. In my next blog I will continue by specifying the CEP-side configuration. :)

Monday, July 18, 2016

Working with WSO2 RabbitMQ Inbound

I am writing this blog assuming you already have knowledge about inbound endpoints and RabbitMQ. If not, please refer to my previous blogs, which give a brief idea about both.

In the WSO2 RabbitMQ inbound, we use the inbound endpoint as a RabbitMQ consumer. It is an event-based inbound, which polls only once to establish a connection with the remote server and then consumes events. The AMQP messaging protocol is used here. AMQP is a wire-level messaging protocol that describes the format of the data sent across the network. If a system or application can read and write AMQP, it can exchange messages with any other system or application that understands AMQP, regardless of implementation language.

Steps to enable RabbitMQ Inbound 
  • Start the WSO2 ESB server by going to <ESB_HOME>/bin and running sh ./wso2server.sh
  • Create two simple sequences. In my case I created sequences named TestIn and amqpErrorSeq with the following configuration.
    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="TestIn" onError="amqpErrorSeq" xmlns="http://ws.apache.org/ns/synapse">
        <log level="full"/>
        <drop/>
    </sequence>



    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="amqpErrorSeq" xmlns="http://ws.apache.org/ns/synapse">
        <log level="full"/>
        <drop/>
    </sequence>
  •  Go to Inbound Endpoints in the Service Bus menu.
  • Click Add Inbound Endpoint. You will see a page like the one below.

  • Enter the name of the endpoint and select type as rabbitmq. Click on Next.
  • In the next page specify 

    • Sequence as TestIn.
    • Error Sequence as amqpErrorSeq. 
    • Suspend as false. We need to set this to false to enable the inbound; if we set it to true, the inbound is disabled.
    • Sequential as true.
    • Coordination as true.
    • rabbitmq.connection.factory as AMQPConnectionFactory
    • rabbitmq.server.host.name as localhost
    • rabbitmq.server.port as 5672. The RabbitMQ management console runs on port 15672, but the AMQP port to specify here is 5672.
    • rabbitmq.server.user.name as guest (in my case I am using the guest account).
    • rabbitmq.server.password as guest (in my case I am using the guest account).
    • rabbitmq.queue.name : specify a queue name.
  • rabbitmq.exchange.name : specify exchange name.
  • If you want to change anything in advanced options, click it and change it.
That's all for the RabbitMQ inbound; we have now created the consumer. To consume events, we need to produce them, so I created a sample Java producer, given below. You can see the message logged in the terminal where the ESB was started.

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class Producer {
    private final static String QUEUE_NAME = "queue";
    private final static String contentEncoding = "utf-8";
    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setUsername("guest");
        factory.setPassword("guest");
        factory.setPort(5672);
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        channel.exchangeDeclare("exchange", "direct", true);
        channel.queueBind(QUEUE_NAME, "exchange", "route");
        String message ="<m:placeOrder xmlns:m=\"http://services.samples\">" +
                "<m:order>" +
                "<m:price>100</m:price>" +
                "<m:quantity>20</m:quantity>" +
                "<m:symbol>RMQ</m:symbol>" +
                "</m:order>" +
                "</m:placeOrder>";
        // Populate the AMQP message properties
        AMQP.BasicProperties.Builder builder = new AMQP.BasicProperties().builder();
        builder.contentType("application/xml");
        builder.contentEncoding(contentEncoding);

        // Publish the message to the exchange using the binding key "route",
        // so the direct exchange routes it to the bound queue
        channel.basicPublish("exchange", "route", builder.build(), message.getBytes());
        System.out.println(" [x] Sent '" + message + "'");
        channel.close();
        connection.close();
    }
}

The output is given below.



Wednesday, July 13, 2016

Working with own SOAP Receiver in WSO2 CEP

Hi all,

In this blog we are going to build everything from the beginning; I will walk through one example from scratch, and you can do the same for your own scenario.

  • First we need to create a stream. I created a stream named "soapStream", shown below.

  • Then we need to create a SOAP event receiver under Manage->Receivers in the management console. I created an event receiver named "soapTest", shown below.

  • Then we need a publisher to check whether our input is coming through. I created a logger publisher named "soapTestLogger" to view the output in the terminal.

  • Now we have finished our initial steps. As we set the transport to http in the SOAP receiver we created, we need to send an HTTP SOAP request. Here I used Postman (a Google Chrome app) to send the request. The address to use is http://localhost:9763/services/<ReceiverName>/receive. My receiver name is soapTest, so I used http://localhost:9763/services/soapTest/receive. In the header we have to set Content-Type to application/xml. In the body we have to give our events according to the stream. The body I used is given below.

    <soap:Envelope
    xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
    soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding"><soap:Body>
      <events>
        <event>
            <payloadData>
                <ID>70</ID>
                <Name>Test Data</Name>
            </payloadData>
        </event>
    </events>
    </soap:Body>

    </soap:Envelope>
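The same request can also be sent from the command line with curl instead of Postman. This is a sketch: payload.xml is a hypothetical file name, and the curl call itself is commented out because it needs a running CEP server with the soapTest receiver deployed.

```shell
# Save the SOAP body above to a file.
cat > payload.xml <<'EOF'
<soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding"><soap:Body>
  <events>
    <event>
        <payloadData>
            <ID>70</ID>
            <Name>Test Data</Name>
        </payloadData>
    </event>
  </events>
</soap:Body>
</soap:Envelope>
EOF

# Post it to the receiver (uncomment with CEP running):
# curl -X POST -H 'Content-Type: application/xml' \
#      -d @payload.xml http://localhost:9763/services/soapTest/receive

grep -c '<event>' payload.xml   # quick sanity check: prints 1
```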
The Postman screenshot is given below.


 The output can be viewed in the terminal (the terminal where CEP was started).

 Now we have finished using the SOAP receiver with a Postman HTTP request. See you in the next blog. :)

Working with SOAP Receiver Sample in WSO2 CEP

Introduction to WSO2 CEP SOAP Receiver

Events are received by WSO2 CEP server using event receivers, which manage the event retrieval process. WSO2 CEP receives events via multiple transports in JSON, XML, Map, Text, and WSO2Event formats, and converts them into streams of canonical WSO2Events to be processed by the server. SOAP event receiver is used to receive events in XML format via HTTP, HTTPS, and local transports.

Running the SOAP sample that is already available

WSO2 CEP ships a sample for every event receiver it has. You can run the sample very easily, as below.
Steps
  • Start the CEP server samples by going to <CEP_HOME>/bin and running the command ./wso2cep-samples.sh -sn 0014
    We specify sample number 0014 because the SOAP sample receiver is in it. You can view it under <CEP_HOME>/samples/artifacts/0014/eventreceivers
The receiver is like this.

<?xml version="1.0" encoding="UTF-8"?>
<eventReceiver name="soapReceiver" statistics="disable" trace="disable" xmlns="http://wso2.org/carbon/eventreceiver">
    <from eventAdapterType="soap">
        <property name="transports">all</property>
    </from>
    <mapping customMapping="disable" type="xml"/>
    <to streamName="org.wso2.event.sensor.stream" version="1.0.0"/>
</eventReceiver> 
  • After starting the CEP server, you can see the stream named "org.wso2.event.sensor.stream" and the receiver "soapReceiver" in your CEP management console.
  • As the next step, you need to produce SOAP events that can be received by CEP. There is a sample Java SOAP producer available in <CEP_HOME>/samples/producers/soap, so you just need to build and run it with Ant. In a terminal, go to the above-mentioned directory and use the Ant command below.

    ant -Durl=http://localhost:9763/services/soapReceiver/receive -Dsn=0014

    Here http://localhost:9763/services/soapReceiver/receive is the HTTP location of the receiver, and then we specify the sample number. The XML-format input data is taken from <CEP_HOME>/samples/artifacts/0014/soapReceiver.txt
  • After this Ant command you can see the output in the terminal where CEP was started, like below.

 That's all; now you have successfully run the sample. In my next blog we will focus on creating our own SOAP receiver and stream and sending SOAP requests to it. Keep in touch :)




Friday, July 1, 2016

Installing RabbitMQ in Ubuntu

Introduction to RabbitMQ

When I first searched the Internet, I found that RabbitMQ is a broker, but I could not understand what that meant; only after some research did I understand. RabbitMQ's task is to accept messages and forward them to receivers. I am sure you are familiar with the postal system: a postman collects letters from the post box and takes them to the post office, where they are sorted by region and delivered to the relevant places by postmen. RabbitMQ follows the same procedure; it acts as the postman, the post office, and the post box. The difference is that the post office deals with paper while RabbitMQ deals with blobs of data. A program that sends messages is called a "Producer" (P). A queue stores messages for some time; it lives within RabbitMQ, so we can say a queue is like a post box. A queue is not bound by any limits; it can store as many messages as you like, so it is essentially an infinite buffer. Just as people put letters into a post box, a producer can send messages to a queue, and many consumers can receive data from one queue. A "Consumer" (C) is a program that mostly waits to receive messages. Note that the producer, consumer, and broker do not have to reside on the same machine; indeed, in most applications they don't. We can also write the producer and consumer in different languages.

Steps to Install RabbitMQ
  1. First check whether RabbitMQ is already installed on your Ubuntu OS. You can use this command in your terminal.
    sudo rabbitmq-server start

    If you get a message like "node with name "rabbit" already running on", then RabbitMQ is already installed on your OS, but you can still upgrade to a new version. To upgrade, or to install from scratch, use the following steps.
  2. Execute the following command to add the APT repository.
     sudo add-apt-repository "deb http://www.rabbitmq.com/debian/ testing main"
  3. To avoid warnings about unsigned packages, add our public key to your trusted key list using below command.
    wget https://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  4. After the above command, use this command.
    sudo apt-key add rabbitmq-signing-key-public.asc
  5. Run the following command to update the package list.
    sudo apt-get update
  6.  Install rabbitmq-server package
    sudo apt-get install rabbitmq-server
    If you already had RabbitMQ, this upgrades it; if you did not, the new version is installed and the server starts automatically.
  7.  You can check whether it has started with the following command.
    sudo rabbitmqctl status
  8. In a browser, go to http://localhost:15672/ . If you don't get an interface like Figure 1, you have to activate the plugins (mentioned in step 9).
  9. Enable plugins.
    sudo rabbitmq-plugins enable rabbitmq_management
  10.  You can use "guest" as both the username and the password.
  11. That's it; RabbitMQ is now successfully installed. :) In my next blogs we will focus more on RabbitMQ.

Figure 1






Wednesday, June 29, 2016

Siddhi Extension- Window Extension

Hi all,

Introduction to WSO2 CEP

WSO2 Complex Event Processor (CEP) is a lightweight, easy-to-use, open source complex event processing server. CEP is used for real-time processing of data. We can feed data in through a receiver (there are many ways to send data to CEP, such as HTTP, Kafka, and WSO2Event), process the stream by writing Siddhi queries in an execution plan, and publish the results in different formats.

According to the Siddhi queries, we can process the data coming from the receiver and publish it. So the Siddhi core is the part that processes the data, which is what matters most in real-time processing.

Introduction to Siddhi

Siddhi Query Language (SiddhiQL) is designed to process event streams and identify complex event occurrences. We have to add streams for input and output in order to write execution plans and Siddhi queries; the queries then apply to those streams.

Creating stream for Input
  • Start the CEP server using sh ./wso2server.sh command in terminal
  • Go to https://localhost:9443/carbon
  • In the Management Console, under the Manage section, you will see Streams. Click on it.
  • Then click Add Event Stream. You will see the following window.

  • There, specify the name, version, description, and attributes. The data should be given in the order of the attributes.
  • Define your output stream in the same way.
Writing queries in Execution Plans
  •  Click Execution Plans under Streaming Analytics in the Manage section, and click Add New Execution Plan.
  • You will see the window below.
  • There you can import a stream (your input stream, related to the receiver) and export a stream (your output stream, related to the publisher).
  • After that you can write your Siddhi queries referring to those streams and put the result into the output stream.
Window Siddhi Extension

A window extension is a type of Siddhi extension. It allows events to be collected and expired without altering the event format, based on the given input parameters, like the built-in window operator. You can get the archetype of the window extension here. Clone this git repo and follow the setup.txt file to get your sample project. Specify the class name as you wish when creating the project from the archetype; the main class and test case class will then be created with the wanted names. The structure of the project is given below.




Here we will see the purposes of the methods in the SampleWindow.java class.
  • init : This method is called before the other methods. When writing queries we pass parameter values to the window, e.g. window.unique:length(ip,3); the init method takes those parameter values from the query.
  • process : The main processing method, called upon event arrival. The logic of the window extension is written here: what kind of processing to apply to each arriving event.
  • find : Finds events from the processor's event pool that match the matching event, based on the finder logic.
  • constructFinder : Constructs a finder capable of finding events at the processor that correspond to the incoming matching event and the given matching expression logic.
  • start : Called only once; can be used to acquire the resources required by the processing element.
  • stop : Called only once; can be used to release the acquired resources.
  • currentState : Used to collect the serialized state of the processing element, which needs to be persisted in order to reconstruct the element in the same state at a different point in time.
  • restoreState : Used to restore the serialized state of the processing element, reconstructing it in the same state it had at a previous point in time.
After implementing this class, you need to write test cases in the SampleWindowTestCase.java class, so you can check whether your logic is correct against some dataset.

In Sample.siddhiext you have to map a name to the main class holding the logic, like this: SampleWindow=org.wso2.extension.siddhi.window.sample.SampleWindow

Then build the project using "mvn clean install" in a terminal, pointing at this project's pom.xml. You will get a jar file in the target folder; copy it to <CEP_HOME>/repository/components/dropins. After starting CEP, you can use a query like
"from <ImportEventName>#window.Sample:SampleWindow(ip,3)"

I have already created a Siddhi extension called UniqueLength, which returns unique events (according to the given attribute) within the given length. You can find the source code here. After cloning, build it, put the jar file in <CEP_HOME>/repository/components/dropins, and use the query below to exercise the extension's functionality.

/* Enter a unique ExecutionPlan */
@Plan:name('ExecutionPlan')

/* Enter a unique description for ExecutionPlan */
-- @Plan:description('ExecutionPlan')

/* define streams/tables and write queries here ... */

@Import('TestUniqueWindowIn:1.0.0')
define stream UniqueIN (timeStamp long, a string, ip string);

@Export('TestUniqueWindowOut:1.0.0')
define stream UniqueOUT (a string, ip string);

from UniqueIN#window.unique:length(ip,3)
select a, ip
insert expired events into UniqueOUT;


Here my siddhiext file is named unique.siddhiext and its content is length=org.wso2.extension.siddhi.window.uniquelength.UniqueLengthWindowProcessor


Hope you have got at least a basic idea of window extensions. Have a nice day. :)

Tuesday, May 24, 2016

Streaming File send with File Inbound/File Connector

Hi all,

I think you are familiar with connectors, streaming, and VFS transfer; if not, please refer to my previous blogs, which give a brief idea about them. As I mentioned in the previous blog, we can implement the VFS sending feature using the file connector and file inbound. The file connector is quite simple to use compared to the VFS sender for various file operations. Before that, we will get some background on inbound endpoints.

Brief intro to inbound endpoint

Inbound endpoints are a feature of WSO2 ESB. An inbound endpoint is a message entry point that can inject messages directly from the transport layer into the mediation layer, without going through the Axis engine. The following diagram illustrates the inbound endpoint architecture. A proxy has to be invoked, but an inbound endpoint triggers itself. There are three types of inbound endpoints in WSO2 ESB.
  • Listening Inbound Endpoint: A listening inbound endpoint listens on a given port for requests that are coming in. When a request is available it is injected to a given sequence.
  • Polling Inbound Endpoint: A polling inbound endpoint polls periodically for data and when data is available the data is injected to a given sequence.
  • Event based Inbound Endpoint: An event-based inbound endpoint polls only once to establish a connection with the remote server and then consumes events.
Here we mostly focus on the polling inbound endpoint, as it deals with files in a periodic manner. The polling inbound protocols available with WSO2 ESB are given below.
  • File Inbound Protocol
  • JMS Inbound Protocol
  • Kafka Inbound Protocol
File Inbound Protocol 

The WSO2 ESB file inbound protocol is a multi-tenant capable alternative to the WSO2 ESB VFS transport. In the previous post we got an idea of the VFS transport; the same functionality can be achieved through the file inbound endpoint. In WSO2 ESB 5.0 we can enable the streaming facility in the file inbound. For streaming, we have to add a message builder and formatter in the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  • Add <messageFormatter contentType="application/file"
            class="org.wso2.carbon.relay.ExpandingMessageFormatter"/> under the message formatters.
  • Add  <messageBuilder contentType="application/file"
            class="org.apache.axis2.format.BinaryBuilder"/> under the message builders.

Steps 
  •  Start ESB 5.0 by going to <ESB_HOME>/bin.
  •  You can get file connector from the store (https://store.wso2.com/store/assets/esbconnector/aec1554a-29ea-4dbb-b8c5-5d529a853aa2).
  •  Add the connector by going to Connectors->Add in WSO2 ESB management console and browse connector zip file and enable it.
  • Go to Service Bus->Sequences in the ESB management console and add the following lines in the source view to specify the connector configuration.

    <?xml version="1.0" encoding="UTF-8"?>
    <sequence xmlns="http://ws.apache.org/ns/synapse">
           <property name="OUT_ONLY" value="true"/>
           <property action="remove" name="ClientApiNonBlocking" scope="axis2"/>
           <fileconnector.send>
            <address>file:///xxx/xxx/Desktop/testESB/OutTest</address>
            <append>true</append>
           </fileconnector.send>
        <drop/>
    </sequence>


    Here the address element specifies where we want to send the file. Then save this sequence in the registry.
  • Then go to Service Bus->Inbound Endpoints->Add Inbound Endpoint. Specify the name and select the type as File, then press next. In the next window you have to specify the sequence; since you already saved your sequence in the registry, you can select it from the configuration registry. You will then get the following window, where you select your sequence.


 In the error sequence you can put fault. Initially set suspend to false; if suspend is true, the file inbound will not be invoked. Set sequential and coordination to true. Specify the FileURI where the file resides and specify the content type; if you want to use streaming, set contentType to application/file. Then click on show advanced options, where you can set streaming to true if you want to send big files. ActionAfterProcess determines what happens after the file is sent to the specified location; you can specify MOVE or DELETE. This must be specified to avoid sending the same file again and again. You can get details of the other parameters here. Then save the inbound endpoint.
  • When you put a file in the fileURI location, you can see the response file in the address location you specified.
Note: The file send operation of the file connector can be used not only with the VFS transport, but also with HTTP, JMS, and FTP to send files.

Tuesday, May 17, 2016

Streaming VFS File Transfer using WSO2 ESB

Brief idea for VFS

VFS means Virtual File System. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference.

Brief Intro to WSO2 ESB VFS transport with streaming

Through this we can pick up a file from a directory and process it within the ESB. You can get more about the WSO2 ESB VFS transport from here. But when we use this VFS configuration for transferring large files (greater than 500 MB), we get Out Of Memory exceptions. In such cases we can use the streaming facility. To enable streaming, we have to add the message builder "org.apache.axis2.format.BinaryBuilder". Apart from that, we need to include the property "ClientApiNonBlocking" in the proxy configuration. To use the VFS transport facility of WSO2 ESB, there are some steps you have to follow.
  1. Enable the VFS transport by editing the <ESB_HOME>/repository/conf/axis2/axis2.xml file and uncommenting the VFS listener and the VFS sender as follows:
    <transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>
    ...
    <transportSender name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportSender"/>
  2.  Add message builder and formatter in the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
    • Add <messageFormatter contentType="application/file"
              class="org.wso2.carbon.relay.ExpandingMessageFormatter"/> under the message formatters.
    • Add  <messageBuilder contentType="application/file"
              class="org.apache.axis2.format.BinaryBuilder"/> under the message builders.
       
  3. You have to add a proxy like below.

    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns="http://ws.apache.org/ns/synapse"
           name="FileProxy"
           transports="vfs"
           startOnLoad="true"
           trace="disable">
       <description/>
       <target>
          <inSequence>
             <log level="custom">
                <property name="FileProxy" value="Processing file"/>
             </log>
             <property name="OUT_ONLY" value="true"/>
             <property name="ClientApiNonBlocking"
                       value="true"
                       scope="axis2"
                       action="remove"/>

             <send>
                <endpoint name="FileEpr">
                   <address uri="vfs:file:///home/Desktop/testESB/out"/>
                </endpoint>
             </send>

          </inSequence>
       </target>
       <parameter name="transport.vfs.Streaming">true</parameter>
       <parameter name="transport.PollInterval">20</parameter>
       <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
       <parameter name="transport.vfs.FileURI">file:///home/Desktop/testESB/in</parameter>
       <parameter name="transport.vfs.MoveAfterProcess">file:///home/Desktop/testESB/original</parameter>
       <parameter name="transport.vfs.MoveAfterFailure">file:///home/Desktop/testESB/failure</parameter>
       <parameter name="transport.vfs.Append">true</parameter>
       <parameter name="transport.vfs.Locking">enable</parameter>
       <parameter name="transport.vfs.FileNamePattern">.*\.zip|.*\.txt</parameter>
       <parameter name="transport.vfs.ContentType">application/file</parameter>
       <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
    </proxy>

    Here we specify ActionAfterProcess to define what happens after the file is sent to the specified location. We can specify MOVE or DELETE for this.

    The source file to send is specified via transport.vfs.FileURI. The destination where we want to send the file is specified in <address uri="vfs:file:///home/Desktop/testESB/out"/>
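The reason streaming avoids the out-of-memory problem can be illustrated with a generic Java sketch (this is not the ESB's internal code): data is copied in fixed-size chunks, so memory use stays constant regardless of file size.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Generic illustration of streaming: copy in fixed-size chunks instead of
// reading the whole file into memory, which is what causes OOM on large files.
public class StreamCopy {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192]; // constant memory regardless of file size
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1 << 20]; // 1 MB of sample data
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out);
        System.out.println("copied " + copied + " bytes");
    }
}
```

With transport.vfs.Streaming set to true, the ESB moves file content in this chunked fashion rather than building the full message in memory.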

    The same can also be done through the file inbound endpoint and the file connector. The next blog will cover that. Until then, bye. :)

 

Friday, April 15, 2016

Integration Test using TestNG for Connectors


Hi all, I think you have read my last post on how to write a connector. Here we are going to look at integration tests for connectors, which are used to test a connector automatically.
 
What is TestNG?

TestNG is a testing framework designed to simplify a vast range of testing needs, from unit testing to integration testing. A TestNG test can be configured with @BeforeXXX and @AfterXXX annotations, which allow you to run some Java logic before and after certain points. If you want to study the TestNG framework further, there is an official web site you can refer to. TestNG is supported by Eclipse, IntelliJ IDEA, and NetBeans.

What is Integration Test?

Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups.

Writing Integration test for Connectors using TestNG
 
Writing a test is typically a three-step process:
  • Write the business logic of your test and insert TestNG annotations in your code.
  • Add the information about your test (e.g. the class name, the groups you wish to run, etc...) in a testng.xml file or in build.xml.
  • Run TestNG.
 When you clone a connector repository from the WSO2 GitHub organization, the project structure looks like below. There you can see the test folder; inside it, we are going to change things according to our needs.


 First we will look at testng.xml and what we have to change in it according to our needs. The structure of testng.xml is given below.


The values under connectorName have to be changed according to our connector's name. After that we have to create a Java class under src->test->java->org.wso2.carbon.connector.integration.test->ConnectorName.
 
In the class we have to import TestNG classes  like below.

import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.wso2.connector.integration.test.base.ConnectorIntegrationTestBase;

In the Java class we set up the environment of the API by giving the API URL, username, password, or OAuth authentication. Setting up the environment should be done before the tests start, so we annotate that method with "@BeforeClass(alwaysRun = true)".

In the test cases we have to test all available methods provided by the API. We have to check that what we get through WSO2 ESB is equal to what we get from a direct call to the API. We can import the Assert class and check whether
  • both HttpStatusCode are equal.
    Assert.assertEquals(esbRestResponse.getHttpStatusCode(), 200);
    Assert.assertEquals(apiRestResponse.getHttpStatusCode(), 200);
  • the things come from body messages are equal.
    Assert.assertEquals(esbRestResponse.getBody().getJSONArray("result").getJSONObject(1).get("number"),
     apiRestResponse.getBody().getJSONArray("result").getJSONObject(1).get("number"));
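The comparison pattern above can be sketched in plain, dependency-free Java. RestResponse here is a made-up stand-in for the framework's response class, used only to illustrate the idea:

```java
import java.util.Map;

// Stdlib-only sketch of the comparison pattern used in connector tests:
// the response obtained through WSO2 ESB must match the response from a
// direct call to the API. RestResponse is a stand-in, not the real class.
public class CompareResponses {
    record RestResponse(int httpStatusCode, Map<String, Object> body) {}

    // True when status codes and the compared body field are equal,
    // mirroring the Assert.assertEquals checks in a real test case.
    static boolean matches(RestResponse esb, RestResponse api) {
        return esb.httpStatusCode() == api.httpStatusCode()
                && esb.body().get("number").equals(api.body().get("number"));
    }

    public static void main(String[] args) {
        RestResponse esb = new RestResponse(200, Map.of("number", "INC0000001"));
        RestResponse api = new RestResponse(200, Map.of("number", "INC0000001"));
        System.out.println(matches(esb, api)); // true
    }
}
```

In a real connector test the two responses come from the ESB proxy and the API endpoint respectively, and mismatches fail the test via Assert.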
We have to check each test case with three types of input: mandatory, optional, and negative.

A POST method should be tested by checking that the HTTP status code equals 201, then getting the last inserted record and verifying that it contains the values that were posted. Update methods also have to be tested like this.

The values for testing have to be given in a format the API can understand, whether it is SOAP or JSON.

We have to create a request.connectorName folder inside the config folder and write the requests. An example of a JSON request is given below.

{
  "apiURL": "www.abc.com",
  "username": "username",
  "password": "password"
}

We have to create a proxies.connectorName folder inside the config folder and write the proxies for it.

You can see the already available connectors and build your own connector. 

Happy Coding :)



Tuesday, March 29, 2016

Brief Idea of WSO2 ESB Connectors

What is a connector?

Connectors allow you to connect to and interact with services provided by outside parties. A connector is a collection of templates that define the set of operations offered by an outside party; in effect, connectors wrap the API of an external service. To write a connector, we should study the API provided by the service, decide which API we are going to use, and then implement the operations the API provides. Most outside parties offer two kinds of services in their APIs: REST and SOAP.

Why Connector?

I am pretty sure most people wonder why we need to use a connector. Many external parties provide APIs to use their methods and tables. Most of us know the Google Blog API, which provides methods such as getting a blog, getting a comment, getting a post, and so on. If you want to use those functions in your software, you would have to implement all of them yourself. In a connector, the methods are already implemented, so by using a connector in your code you save the time of recreating them. As you know, WSO2 products are open source, so you can download the connectors and use them.

Writing ESB Connector

You can clone ESB connectors from the following git hub link "https://github.com/wso2/esb-connectors".
$ git clone https://github.com/wso2/esb-connectors

After cloning you get a lot of connectors. They can be SOAP, REST, or Java connectors, so you can select one according to your task and modify it.

Open a connector matching your API in IntelliJ. You will see the structure below.

Here you can see some folder structure.
  • Repository: Here you have to put your WSO2 ESB which you downloaded.
  • Resources: Here you have to implement your API methods. Some outside parties provide sets of grouped methods, and each such set of methods must go into one folder.
    • config: This folder contains init.xml and component.xml.
      • init.xml: Initialize the connector environment. Here we specify username, password, API url, Access token and etc which are related to initialize the API.
      • component.xml: This file is included in every module (folder) and defines the available methods in the module. In the config module it lists init.xml and its description. component.xml will look like this.
        <component name="config" type="synapse/template"> 
              <subComponents> 
                  <component name="init"> 
                       <file>init.xml</file> 
                       <description>Configuration.</description> 
                   </component> 
             </subComponents>
         </component>
    •  icon: Here you have to put API's icons.

      After that you can create your own folder structure, and within the folder you can create your synapse template like below.
      <template name="method" xmlns="http://ws.apache.org/ns/synapse"> 
         <parameter name="param" description="The parameter which needs to be given to the API."/>
           <sequence> 
              <property name="uri.var.param" expression="$func:param"/>
                <call> 
                   <endpoint> 
                        <http method="GET" uri-template="{uri.var.URL}"/> 
                   </endpoint> 
               </call> 
          </sequence> 
      </template>
  • connector.xml: This defines connector name and dependent modules.
    <?xml version="1.0" encoding="UTF-8"?>
    <connector> 
       <component name="compo" package="org.wso2.carbon.connector"> 
             <dependency component="config"/>        
            <description>WSO2 Connector for API.</description> 
      </component> 
    </connector>
  • pom.xml: Contains the required dependencies for connector core libraries and relevant synapse libraries as well as maven repositories for a specific connector.

    In my next blog I will cover Integration Test using TestNG. :)


Wednesday, March 16, 2016

Working with Import Set API in Service Now


What is ServiceNow?

ServiceNow is a software platform that supports IT service management and automates common business processes. This software as a service (SaaS) platform contains a number of modular applications that can vary by instance and user.

ServiceNow has two types of APIs: SOAP and REST. Most functions are the same in both. ServiceNow offers three types of REST APIs: the Table API, the Aggregate API, and the Import Set API. There are various methods and tables in those APIs.

Accessing the Import Set API differs from the other APIs because we cannot directly access the records in import set tables; we have to grant ACL permission for that. This blog illustrates the Import Set API.

What is Import Set API?

The ServiceNow Import Set API provides a REST interface for import set tables. The API transforms incoming data based on associated transform maps and supports synchronous transforms. Access to tables via the REST API is restricted by basic authentication and the rest_service ACL.

The Import Set API provides two methods: POST a record, and GET a record by sys_id. When you create an instance, four tables are provided: imp_computer, imp_location, imp_notification and imp_user. You can also create your own tables. The user should have the rest_service role to access the above two methods.
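As an illustration, a POST to the imp_computer import set table with basic authentication could look roughly like the Java sketch below. The instance name, credentials, payload fields, and the /api/now/import/<table> path are assumptions to verify against your instance's REST API Explorer:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical client sketch: instance name, credentials, and payload are
// placeholders for illustration only.
public class ImportSetClient {

    // Builds the HTTP Basic authentication header value.
    public static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    // Posts one record to the imp_computer import set table (not invoked in
    // main, since it needs a live instance).
    public static int postRecord(String instance, String user, String password,
                                 String json) throws IOException {
        URL url = new URL("https://" + instance
                + ".service-now.com/api/now/import/imp_computer");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", basicAuth(user, password));
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // 201 is expected on success
    }

    public static void main(String[] args) {
        // Only demonstrate the header construction; the POST needs a real instance.
        System.out.println(basicAuth("admin", "admin")); // Basic YWRtaW46YWRtaW4=
    }
}
```

Remember that the sys_id in the response belongs to the target table, not to the import set staging table, as explained below.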

In this blog, I am discussing about how to give access to the table imp_computer and  create an exclusive ACL for rest_service role.

Giving Application Access to the records in the table
1. Go to the link "https://instancename.service-now.com/imp_computer_list.do" (specify your own instance name here). The page below will be displayed.
2. Then Right Click->Configure->Table


3. A window like the one below will appear. In this window you can tick the check boxes "can create", "can read", "can update" and "can delete" according to your requirements.

Create an exclusive ACL for rest_service role, giving access to the table

Before creating ACL for rest_service role, we have to create rest_service role for the user.
1. Elevating to a privileged role: Type "roles" in the filter of your instance, then click on Roles in the list (left panel). The following window will appear.


Then set Elevated privilege to true for security_admin.

 After that, click on the lock icon that appears next to the user's name in the header.

 Then a pop up window will appear. In that window you have to tick security_admin.

2. Navigate to System Security > Access Control (ACL) in your instance. Following window will appear.

3. Click New.
4. Define the object the ACL rule secures and the permissions required to access it. Put a read ACL for the rest_service role (you need the security_admin role to create new ACLs): Incident.none and Incident.*



Now you have given ACL permission for that table. The Get Record method can then be accessed by specifying the sys_id.

The sys_id that is returned during a POST call to an import set table is not the sys_id of the record in the import set table; it is the sys_id in the target table (where the record was transformed to).

To get the sys_id as an admin, you have several options.

Option 1: Navigate to the record (https://instancename.service-now.com/imp_computer_list.do) in a list, right click, and copy the sys_id.




Option 2: Open the record in a new window, without the frames. The URL will contain the sys_id.


Option 3: While in the record, right click on the header, select show XML, and find sys_id in there.



After getting the sys_id go to REST API Explorer by typing "rest" in the filter in your instance.

Then specify the table name as "imp_computer" and your copied sys_id and click send.

The response message will look like below.