Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Chicago is famous for many reasons, including the Bears, a specific style of hot dogs, and, of course, for giving the world skyscrapers. PHP is also known for legendary architecture, being the underlying language for 77.5% of the web via frameworks like Laravel, Drupal, and WordPress. Community members from all over the world, representing all those frameworks and more, got together for php[tek] 2023, the 15th annual convention where users shared knowledge and best practices for leveraging the language that came to define the internet over the last 28 years.

There was a real sense of community at the event, summarized very succinctly in the day one keynote, "Let Go of Ownership," from Tim Lytle. He encouraged us to think about our code and the community not as things we own but as things we are entrusted to take care of over time. He said we should think in terms of stewardship, a word that sums the subject up nicely.

Over the three days of the event, speakers told their stories about working with PHP and the opportunities it has afforded them. They also dove into some highly technical topics, even showing how PHP itself is compiled. Multiple speakers also covered security and customer data compliance. Here are just a few highlights from the event.

API Security Is Critical

In his talk, "The Many Layers of OAuth," Keith Danger Casey walked us through OAuth, the open protocol for secure authorization. He described OAuth through the analogy of a fancy hotel. In a hotel, you present your credit card and another form of ID to the front desk to prove you are who you say you are. They check you are authentic and expected. They then issue you a hotel key card to get into your room, the gym, and any other restricted areas.
The benefit of the key card is that you do not need to re-prove who you are with your complete ID and credit card at every door. Key cards also expire automatically and are easily replaceable.

In OAuth terms, the front desk is the OAuth Authorization Server. The key card is your Access Token. Your room and all the other areas your key card opens are the system Resources. This model achieves the main goals of OAuth:

Delegation: Sharing access without sharing credentials.
Scoping and Expiration: Granting limited access for a short amount of time.
Separation: Keeping policy decisions apart from enforcement mechanisms.

One crucial point Keith noted is that OAuth itself does not specify how you do authentication, just authorization. Authentication, often abbreviated as AuthN, verifies you are who you say you are. This is commonly achieved by opening a web browser and having you log in through another trusted service like GitHub or Google, relying on OpenID Connect. Authorization, abbreviated as AuthZ, is concerned with whether you are allowed to perform an action or access a resource. You end up with a three-step security process: you prove who you are (AuthN), get approval to reach certain resources (AuthZ), and finally access those resources using the token the process provides.

Attackers commonly target each of these steps and the connections between them, so it is vital to think through security at each of these vectors. This starts with always using HTTPS to prevent man-in-the-middle attacks. It is also important to scope tokens appropriately, only allowing authorization for the resources required to complete the work. Tokens also need to be short-lived; the shorter the time to live, the better.
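To make the AuthZ step concrete, here is a minimal, hypothetical sketch (not from Keith's talk) of the checks a resource server might run against an already-issued access token. The token shape and field names are invented for illustration:

```python
import time

def authorize(token: dict, required_scope: str) -> bool:
    """Check an already-authenticated access token before serving a resource."""
    if token["expires_at"] <= time.time():
        return False  # short-lived tokens limit the damage of a leaked token
    return required_scope in token["scopes"]  # only the scopes that were granted

# A token scoped like a hotel key card: room and gym, but not the vault.
token = {"scopes": ["room:open", "gym:open"], "expires_at": time.time() + 3600}
print(authorize(token, "gym:open"))    # True
print(authorize(token, "vault:open"))  # False
```

The two checks map directly to the "Scoping and Expiration" goal above: access is both limited in reach and limited in time.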
Keep Your Webhooks Safe as Well

Keith echoed many of these same security lessons in his other talk at the event, "Webhooks: Lessons (Un)learned." Keith was responsible for the initial research that became the website webhooks.fyi. While investigating webhooks, he realized that every company does them slightly differently, but there are some underlying security concerns we all need to be aware of.

It is vital to secure the payload itself. There are a number of ways to accomplish this, from shared secrets or OAuth to much more secure methods like keyed-hash message authentication codes (HMAC) or Mutual Transport Layer Security (mTLS). It is also important to protect against replay attacks by using timestamps. We are proud to say that GitGuardian Custom Webhooks use HMAC and timestamps to keep our customers safe.

Back on the topic of APIs, Tim Bond talked about external threats in his session, "Attackers want your data, and they're getting it from your API." He said APIs are everywhere, including, in the broadest sense, the front of your website. The first step to securing your API is limiting responses to only the data absolutely needed to make the app work. HTTPS should always be enforced, echoing what Keith said earlier in the event. He also encouraged using certificate pinning, where you only accept specific, pre-approved certificates. If possible, he suggests enforcing dynamic integrity checking, as you can do through the Google Play store.

One way to discourage attackers is rate limiting. Hackers will often try to enumerate endpoints, especially around user IDs. Someone looking up `user/123`, `user/124`, then `user/125` in rapid calls is likely someone up to no good. Shutting them down should not interfere with legitimate business.
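The HMAC-plus-timestamp pattern Keith recommends can be sketched generically. Actual header names and signing formats vary by provider, so treat this as an illustration of the idea rather than any vendor's exact scheme:

```python
import hashlib
import hmac
import time

TOLERANCE_SECONDS = 300  # reject deliveries older than 5 minutes (replay protection)

def verify_webhook(secret: bytes, payload: bytes, timestamp: str, signature: str) -> bool:
    """Verify an incoming webhook signed over 'timestamp.payload' with HMAC-SHA256."""
    if abs(time.time() - float(timestamp)) > TOLERANCE_SECONDS:
        return False  # stale delivery: a captured request is being replayed
    expected = hmac.new(secret, timestamp.encode() + b"." + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time comparison
```

Because the timestamp is part of the signed message, an attacker cannot simply swap in a fresh timestamp without invalidating the signature.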
Further, he suggested using universally unique identifiers (UUIDs), so instead of sequential user numbers, each user is assigned a long random value unrelated to other user IDs. For example, turning `user/123` into `user/SINFKLDFDF51F` makes it harder for an attacker to guess what other user IDs could be. Toward the end of his session, Tim suggested familiarizing yourself with the OWASP API Security Top 10. For those who wanted to dig deeper, he suggested the free training course from PortSwigger.

Data Privacy Is the Law

Data privacy laws are always evolving, and it can be tricky to keep up to date with the latest news. That is why we were all glad for the session "Data Privacy in Software Development" by Jana Sloane, an attorney at Microsoft. She was quick to state that the session was not legal advice but was intended to point us in the right direction for talking to internal legal teams. Having those conversations early in the development lifecycle can help keep everyone compliant and safe.

Jana gave us a brief overview of today's data privacy landscape. In the US, every state has implemented its own framework. In the EU, it is a little clearer, thanks to legislation like GDPR, but she said there is a lot of case law being worked out right now, so talking to legal teams earlier in the process can help you stay ahead of what is on the horizon. In addition to government regulation, software developers need to be aware of any contractual obligations their company must comply with. For example, ensuring your new feature or product will still fall within SOC 2 compliance is important so there are no surprises when you try to launch.

When thinking about access management (who can get our data), we need to ensure data is:

Necessary and proper: We are only collecting what is truly needed for the application to work.
Accessed by proper personnel: There is a clear log and authorization policy in place for anyone or any service that can obtain the data.
Used correctly: If you say exactly what you will use the data for in the terms of service, you must limit the use to only those purposes.
Retained accurately: Properly storing data means encrypting it properly and thinking through geolocation issues, only storing it in places allowed by data sovereignty law.

Lastly, you should have a clear policy for how long you are allowed to keep user data. It should not be forever. Your policy should also allow the user to request deletion at any time. Any time you want to use the data for a new or different reason, you need to inform the customer and have them opt in to the new use, letting them opt out of the system if they choose.

Scanning for Better Code

Scott Keck-Warren began his session "Reducing Bugs With Static Code Analysis" by telling the story of breaking live production websites when he tried to fix bugs on the live server. He quickly learned that there needed to be a way to test his fix before it got to the production machine. His team moved to manual code analysis, which was a step up from breaking production but was slow and error-prone; human beings were still too involved in the process. His team moved next to dynamic testing. While dynamic testing is much more reliable, it takes a while to run. What they finally found that was both fast and reliable was a form of source code analysis, or SCA, called static code analysis. This allows code to be analyzed without going through a build step, which can save a lot of time and resources. He found PHP-specific tools like PHPStan and PHP_CodeSniffer were a good fit for their needs, given the codebase was mostly PHP.
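Scott's tools are PHP-specific, but the underlying idea of static code analysis (inspecting source without executing it) can be illustrated in a few lines of Python using the standard library's ast module. This toy checker flags bare `except:` clauses, a pattern real linters also warn about because it can silently swallow errors:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare 'except:' clauses without running the code."""
    tree = ast.walk(ast.parse(source))  # parse to a syntax tree; nothing executes
    return [node.lineno for node in tree
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = "try:\n    risky()\nexcept:\n    pass\n"
print(find_bare_excepts(code))  # [3]
```

Real analyzers like PHPStan apply hundreds of such rules, plus type inference, but the principle is the same: the code is analyzed as data, never run.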
He is also a fan and user of Rector, a tool that "instantly upgrades and refactors the PHP code of your application." What made these tools truly successful for his org was consistent use through automation. His favorite way of automating testing is through git hooks. We love git hooks at GitGuardian, as that is how you can leverage ggshield to prevent yourself from committing secrets.

We are also big believers in source code analysis, especially for security. This is why we have officially partnered with Snyk to help our users and the world strengthen developer security through SCA. While the tools Scott cited are excellent for debugging PHP code for functionality, Snyk can help any developer deliver more secure code no matter what language your company relies on.

Good Software Requires a SOLID Foundation

When you think of approaches to building software, you might think of Agile, Waterfall, or even DevOps. However, there is a concept underneath all those approaches that deals with how to think about the code itself. Cori Lint covered this in her talk, "Building a SOLID Foundation." The SOLID framework was introduced to the world in a 2000 paper by Robert C. Martin defining best practices for object-oriented programming (OOP), the predominant approach of modern software languages and frameworks. SOLID stands for:

Single Responsibility Principle: Each component should have only one responsibility.
Open-Closed Principle: OOP entities should be open for extension but closed for modification.
Liskov Substitution Principle: Objects of a superclass should be replaceable with objects of its subclasses without breaking the application.
Interface Segregation Principle: A class shouldn't have to implement members that it does not need.
Dependency Inversion Principle: High-level modules should not depend on low-level modules. Both should depend on abstractions.

Cori gave multiple examples of these principles, including a `PlayInstrument` class. One can imagine a class for playing instruments that implements the methods toot(), pressKey(), bowString(), and pluckString(). Let's imagine we try to use `PlayInstrument` to play a violin. Violins can't toot() or pressKey(). Thus, this class violates the Interface Segregation Principle, and we should find a better approach. You could do this by creating new classes to replace the generic `PlayInstrument` class: one for wind instruments, one for string instruments, and perhaps new ones for percussion. These new classes would be simpler and reusable, making the program ultimately more resilient and easier to implement in code.

A Community Supporting Multiple Communities

PHP is at the heart of the internet, taking the form of many frameworks and powering many services. Just as the code is widespread and used in diverse ways, the community itself varies from security experts focused on APIs to traditional website builders to microservice architects. It is truly a global community, as we had folks from all over the world attend php[tek]. No matter where you are on the planet or what particular focus you have in your day-to-day work, security surely lies at the heart of it. We are proud to support developers, DevOps, and security teams as they work to make their code more secure by keeping their secrets secret.
In this blog post, you will learn how to build a serverless solution for entity detection using Amazon Comprehend, AWS Lambda, and the Go programming language. Text files uploaded to Amazon Simple Storage Service (S3) will trigger a Lambda function, which will analyze them, extract entity metadata (name, type, etc.) using the AWS Go SDK, and persist it to an Amazon DynamoDB table. You will use Go bindings for AWS CDK to implement "Infrastructure-as-code" for the entire solution and deploy it with the AWS Cloud Development Kit (CDK) CLI. The code is available on GitHub.

Introduction

Amazon Comprehend leverages NLP to extract insights from documents, including entities, key phrases, language, sentiments, and other elements. It utilizes a pre-trained model that is continuously updated with a large body of text, eliminating the need for training data. Additionally, users can build their own custom models for classification and entity recognition with the help of Flywheels. The platform also offers built-in topic modeling to organize documents based on similar keywords. For document processing, there is a synchronous mode for a single document or a batch of up to 25 documents, while asynchronous jobs are recommended for processing large numbers of documents.

Let's learn Amazon Comprehend with a hands-on tutorial. We will be making use of the entity detection feature, wherein Comprehend analyzes the text and identifies all the entities present, as well as their corresponding entity type (e.g., person, organization, location). Comprehend can also identify relationships between entities, such as identifying that a particular person works for a specific company. Automatically identifying entities within large amounts of text data can help businesses save time and resources that would otherwise be spent manually analyzing and categorizing it.
Prerequisites

Before you proceed, make sure you have the following installed:

Go programming language (v1.18 or higher)
AWS CDK
AWS CLI

Clone the project and change to the right directory:

git clone https://github.com/abhirockzz/ai-ml-golang-comprehend-entity-detection
cd ai-ml-golang-comprehend-entity-detection

Use AWS CDK To Deploy the Solution

The AWS Cloud Development Kit (AWS CDK) is a framework that lets you define your cloud infrastructure as code in one of its supported programming languages and provision it through AWS CloudFormation. To start the deployment, invoke cdk deploy and wait for a bit. You will see a list of resources that will be created and will need to provide your confirmation to proceed.

cd cdk
cdk deploy

# output
Bundling asset ComprehendEntityDetectionGolangStack/comprehend-entity-detection-function/Code/Stage...
✨ Synthesis time: 4.32
//.... omitted
Do you wish to deploy these changes (y/n)? y

Enter y to start creating the AWS resources required for the application. If you want to see the AWS CloudFormation template that will be used behind the scenes, run cdk synth and check the cdk.out folder.

You can keep track of the stack creation progress in the terminal or navigate to the AWS console: CloudFormation > Stacks > ComprehendEntityDetectionGolangStack.

Once the stack creation is complete, you should have:

An S3 bucket - the source bucket to which you upload text files
A Lambda function to execute entity detection on the file contents using Amazon Comprehend
A DynamoDB table to store the entity detection results for each file
A few other components (like IAM roles, etc.)

You will also see the following output in the terminal (resource names will differ in your case).
In this case, these are the names of the S3 bucket and the DynamoDB table created by CDK:

✅ ComprehendEntityDetectionGolangStack

✨ Deployment time: 139.02s

Outputs:
ComprehendEntityDetectionGolangStack.entityoutputtablename = comprehendentitydetection-textinputbucket293fcab7-8suwpesuz1oc_entity_output
ComprehendEntityDetectionGolangStack.textfileinputbucketname = comprehendentitydetection-textinputbucket293fcab7-8suwpesuz1oc
.....

You can now try out the end-to-end solution!

Detect Entities in Text File

To try the solution, you can either use a text file of your own or the sample files provided in the GitHub repository. I will be using the S3 CLI to upload the files, but you can use the AWS console as well.

export SOURCE_BUCKET=<enter source S3 bucket name - check the CDK output>
aws s3 cp ./file_1.txt s3://$SOURCE_BUCKET
aws s3 cp ./file_2.txt s3://$SOURCE_BUCKET

# verify that the files were uploaded
aws s3 ls s3://$SOURCE_BUCKET

The Lambda function will detect entities and store the result (entity name, type, and confidence score) in a DynamoDB table. Check the DynamoDB table in the AWS console. You can also use the CLI to scan the table:

aws dynamodb scan --table-name <enter table name - check the CDK output>

Don't Forget To Clean Up

Once you're done, to delete all the services, simply use:

cdk destroy

#output prompt (choose 'y' to continue)
Are you sure you want to delete: ComprehendEntityDetectionGolangStack (y/n)?

You were able to set up and try the complete solution. Before we wrap up, let's quickly walk through some of the important parts of the code to get a better understanding of what's going on behind the scenes.

Code Walkthrough

We will only focus on the important parts - some code has been omitted for brevity.

CDK

You can refer to the complete CDK code here.
bucket := awss3.NewBucket(stack, jsii.String("text-input-bucket"),
	&awss3.BucketProps{
		BlockPublicAccess: awss3.BlockPublicAccess_BLOCK_ALL(),
		RemovalPolicy:     awscdk.RemovalPolicy_DESTROY,
		AutoDeleteObjects: jsii.Bool(true),
	})

We start by creating the source S3 bucket.

table := awsdynamodb.NewTable(stack, jsii.String("entites-output-table"),
	&awsdynamodb.TableProps{
		PartitionKey: &awsdynamodb.Attribute{
			Name: jsii.String("entity_type"),
			Type: awsdynamodb.AttributeType_STRING},
		SortKey: &awsdynamodb.Attribute{
			Name: jsii.String("entity_name"),
			Type: awsdynamodb.AttributeType_STRING},
		TableName: jsii.String(*bucket.BucketName() + "_entity_output"),
	})

Then, we create a DynamoDB table to store the entity detection results for each file.

function := awscdklambdagoalpha.NewGoFunction(stack, jsii.String("comprehend-entity-detection-function"),
	&awscdklambdagoalpha.GoFunctionProps{
		Runtime:     awslambda.Runtime_GO_1_X(),
		Environment: &map[string]*string{"TABLE_NAME": table.TableName()},
		Entry:       jsii.String(functionDir),
	})

table.GrantWriteData(function)
bucket.GrantRead(function, "*")
function.Role().AddManagedPolicy(awsiam.ManagedPolicy_FromAwsManagedPolicyName(jsii.String("ComprehendReadOnly")))

Next, we create the Lambda function, passing the DynamoDB table name to it as an environment variable. We grant the function write access to the table and read access to the bucket, and attach the ComprehendReadOnly managed policy to its role.

function.AddEventSource(awslambdaeventsources.NewS3EventSource(bucket,
	&awslambdaeventsources.S3EventSourceProps{
		Events: &[]awss3.EventType{awss3.EventType_OBJECT_CREATED},
	}))

Finally, we add an event source to the Lambda function so that it is triggered when a text file is uploaded to the source bucket.
awscdk.NewCfnOutput(stack, jsii.String("text-file-input-bucket-name"),
	&awscdk.CfnOutputProps{
		ExportName: jsii.String("text-file-input-bucket-name"),
		Value:      bucket.BucketName()})

awscdk.NewCfnOutput(stack, jsii.String("entity-output-table-name"),
	&awscdk.CfnOutputProps{
		ExportName: jsii.String("entity-output-table-name"),
		Value:      table.TableName()})

We also export the S3 bucket and DynamoDB table names as CloudFormation outputs.

Lambda Function

You can refer to the complete Lambda Function code here.

func handler(ctx context.Context, s3Event events.S3Event) {
	for _, record := range s3Event.Records {
		sourceBucketName := record.S3.Bucket.Name
		fileName := record.S3.Object.Key

		err := detectEntities(sourceBucketName, fileName)
		if err != nil {
			log.Fatal("failed to detect entities in file", fileName, err)
		}
	}
}

The Lambda function is triggered when a text file is uploaded to the source bucket. For each text file, the function extracts the text and invokes the detectEntities function. Let's go through it.

func detectEntities(sourceBucketName, fileName string) error {
	// read the uploaded text file from the source bucket
	result, err := s3Client.GetObject(context.Background(), &s3.GetObjectInput{
		Bucket: aws.String(sourceBucketName),
		Key:    aws.String(fileName),
	})
	if err != nil {
		return err
	}

	buffer := new(bytes.Buffer)
	buffer.ReadFrom(result.Body)
	text := buffer.String()

	// ask Comprehend to detect entities in the text
	resp, err := comprehendClient.DetectEntities(context.Background(), &comprehend.DetectEntitiesInput{
		Text:         aws.String(text),
		LanguageCode: types.LanguageCodeEn,
	})
	if err != nil {
		return err
	}

	// store each entity's type, name, and confidence score in DynamoDB
	for _, entity := range resp.Entities {
		item := make(map[string]ddbTypes.AttributeValue)
		item["entity_type"] = &ddbTypes.AttributeValueMemberS{Value: fmt.Sprintf("%s#%v", fileName, entity.Type)}
		item["entity_name"] = &ddbTypes.AttributeValueMemberS{Value: *entity.Text}
		item["confidence_score"] = &ddbTypes.AttributeValueMemberS{Value: fmt.Sprintf("%v", *entity.Score)}

		_, err := dynamodbClient.PutItem(context.Background(), &dynamodb.PutItemInput{
			TableName: aws.String(table),
			Item:      item,
		})
		if err != nil {
			return err
		}
	}
	return nil
}

The detectEntities function first reads the text file from the source bucket.
It then invokes the DetectEntities API of the Amazon Comprehend service. The response contains the detected entities. The function then stores the entity type, name, and confidence score in the DynamoDB table.

Conclusion and Next Steps

In this post, you saw how to create a serverless solution using Amazon Comprehend. The entire infrastructure lifecycle was automated using AWS CDK. All this was done using the Go programming language, which is well-supported in AWS Lambda and AWS CDK.

Here are a few things you can try out to extend this solution:

Try experimenting with other Comprehend features, such as detecting PII entities.
The entity detection used a pre-trained model. You can also train a custom model using the Comprehend Custom Entity Recognition feature, which allows you to use images, scanned files, etc. as inputs (rather than just text files).

Happy building!
High-level and low-level programming languages are two distinct categories of programming languages used for writing computer programs. They differ significantly in terms of their level of abstraction, ease of use, and the types of tasks they are best suited for. In this extensive discussion, we'll explore the differences between these two language categories in detail.

High-Level Programming Languages

1. Abstraction Level

High-level programming languages are designed with a high level of abstraction. This means that they provide programmers with a set of easy-to-understand and human-readable commands and structures. These languages abstract away many of the low-level details of the computer's hardware, making it easier for developers to focus on solving problems rather than managing hardware-specific intricacies.

2. Readability and Ease of Use

High-level languages are known for their readability and ease of use. Programmers can write code that closely resembles human language, which makes it more accessible to a wider range of developers. This readability often leads to shorter development times, as code can be written and maintained more efficiently.

3. Portability

High-level languages are typically portable, meaning that code written in one high-level language can often be run on different computer architectures or operating systems with minimal modification. This portability is facilitated by the use of interpreters or compilers that translate high-level code into machine code or an intermediate representation.

4. Productivity

High-level languages are designed to enhance programmer productivity. They provide built-in functions and libraries that simplify common tasks. Programmers can focus on problem-solving and application logic rather than getting bogged down in low-level details.

5. Examples

Examples of high-level programming languages include Python, Java, C++, JavaScript, Ruby, and PHP.
Python, for instance, is known for its simplicity and readability, making it a popular choice for beginners and experienced developers alike.

6. Performance

High-level languages generally sacrifice some level of performance for ease of use and portability. They rely on interpreters or compilers to convert code into machine code, which can introduce some overhead. While high-level languages can be optimized for performance in many cases, they may not be as efficient as low-level languages for certain types of tasks, such as system-level programming.

Low-Level Programming Languages

1. Abstraction Level

Low-level programming languages are closer to the hardware and have a lower level of abstraction. They provide more direct control over the computer's hardware resources. Programmers working with low-level languages have to manage memory, registers, and hardware-specific details explicitly.

2. Readability and Ease of Use

Low-level languages are known for their reduced readability and increased complexity. They often involve working with cryptic symbols and require a deep understanding of computer architecture. Writing code in low-level languages can be error-prone and time-consuming, as programmers must handle many low-level details.

3. Portability

Low-level languages are generally not portable. Code written in a low-level language is often specific to a particular computer architecture or operating system. To run on different platforms, code must be rewritten or adapted for each target system.

4. Productivity

Low-level languages can be less productive for most application development tasks because they require more effort and time to write and debug. They are typically reserved for specialized tasks where fine-grained control over hardware is necessary.

5. Examples

Examples of low-level programming languages include Assembly language and C.
Assembly language provides a symbolic representation of machine code instructions, while C offers a higher level of abstraction compared to Assembly but still allows for close control over hardware.

6. Performance

Low-level languages can deliver superior performance in situations where fine-tuned control over hardware is critical. For tasks like operating system development, device drivers, and embedded systems, low-level languages are often preferred. They allow for efficient use of system resources and direct manipulation of memory and hardware registers.

Use Cases and Trade-Offs

High-Level Languages

High-level languages are ideal for a wide range of application development tasks, including web development, data analysis, scientific computing, and more. They are the preferred choice for rapid development, prototyping, and projects where performance is not the primary concern. High-level languages abstract away complexity, making them suitable for programmers with varying levels of expertise.

Low-Level Languages

Low-level languages are essential for system-level programming tasks, such as developing operating systems, device drivers, and firmware for embedded systems. They are used in situations where absolute control over hardware and maximum performance are required. Programmers working with low-level languages typically have a deep understanding of computer architecture and hardware.

Translating Between High-Level and Low-Level Languages

In practice, it is common to use both high-level and low-level languages within a single project or software ecosystem. This is often achieved through the use of libraries and interfaces. High-level languages may include mechanisms for calling functions or using libraries written in low-level languages. Conversely, low-level languages may provide ways to interface with high-level languages or use their libraries. This mix allows developers to leverage the strengths of each type of language while managing the trade-offs.
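As a small illustration of this bridging, Python (high-level) can call a routine compiled from C (low-level) through the standard library's ctypes module. This sketch assumes a Unix-like platform where the C standard library can be located at runtime:

```python
import ctypes
import ctypes.util

# Locate and load the C standard library; fall back to the symbols already
# linked into the running process if find_library cannot resolve a path.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Call the C library's abs() directly from Python: a high-level language
# invoking machine code compiled from a low-level one.
print(libc.abs(-42))  # 42
```

Most high-level ecosystems offer some variant of this mechanism (JNI for Java, cgo for Go, native modules for Node.js), which is how performance-critical inner loops end up in C while the rest of the application stays high-level.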
Choosing Between High-Level and Low-Level Languages

The choice between high-level and low-level languages depends on the specific requirements of a project:

If rapid development, readability, and portability are essential, a high-level language is a better choice.
If maximum control over hardware or high-performance optimization is necessary, a low-level language may be more appropriate.

In many cases, developers use a combination of both to balance productivity and performance.

Conclusion

High-level and low-level programming languages serve different purposes in the world of software development. High-level languages prioritize ease of use, readability, and portability, making them suitable for a wide range of applications. Low-level languages offer fine-grained control over hardware and exceptional performance, making them indispensable for system-level programming. The choice between these two types of languages depends on project requirements, with many developers and projects benefiting from a combination of both to harness the advantages of each. Understanding the distinctions between high-level and low-level languages empowers programmers to make informed decisions when selecting the right tool for the job, ultimately leading to more efficient and effective software development.
Cascading Style Sheets (CSS) are the stylistic heartbeat of the web, allowing developers to orchestrate visually stunning and user-friendly interfaces. The elegance and functionality of web applications are largely influenced by two core principles of CSS: nesting and the cascade. This article sheds light on these principles, elucidating their nuances, benefits, and implementation strategies, enriched with practical examples and concrete illustrations.

Delving Into CSS Nesting

Definition of CSS Nesting

CSS nesting is a technique where CSS selectors are placed inside the scope of other selectors, reflecting the hierarchical structure of HTML elements and aiding in organizing and structuring the stylesheet more logically and clearly.

Advantages of CSS Nesting

Nesting provides:

Organizational Clarity: It organizes the stylesheet in a way that reflects the HTML structure, enhancing readability and maintainability.
Scoped Styling Precision: It enables developers to target elements more precisely, mitigating unintended style overrides.

Practical Illustration of CSS Nesting

Consider a webpage with multiple sections, each with a heading and a paragraph. Using CSS nesting, the styling can be scoped and organized as follows:

.section {
  background-color: #f0f0f0;

  & h2 {
    font-size: 2em;
    color: #333;
  }

  & p {
    color: #666;
    font-size: 1em;
  }
}

Here, the styling for h2 and p is neatly nested within .section, ensuring clarity and specificity.

Deciphering the Cascade

Defining the Cascade

The cascade is the algorithm in CSS responsible for determining which styling rules apply when multiple rules conflict. It employs a set of principles, including importance, specificity, source order, and inheritance, to arbitrate competing rules.

The Significance of the Cascade

Conflict Resolution: It systematically resolves conflicting styling rules, ensuring uniformity in style application.
Enhanced Predictability: It allows developers to forecast and control how styles are applied, establishing a coherent styling environment. Understanding the Cascade Through Examples Suppose we have three conflicting rules with differing levels of specificity and importance:

CSS
/* Rule 1 - Class Selector */
.text { color: red; }

/* Rule 2 - ID Selector */
#content p { color: blue; }

HTML
<!-- Rule 3 - Inline Style -->
<p style="color: green;">Hello World!</p>

In this scenario, the cascade will follow these steps: 1. Importance: The inline style (Rule 3) is considered more important than external or internal styles. 2. Specificity: If the importance is the same, the cascade looks at specificity. ID selectors (#) have higher specificity than class selectors (.) and type selectors (e.g., p). 3. Source Order: If both importance and specificity are the same, the last declared style will be applied. Therefore, the paragraph text will be rendered in green due to the inline style (Rule 3). Integrating Nesting With the Cascade The Interplay of Nesting and Cascade The intertwining of nesting and the cascade adds another layer of complexity. Nesting inherently escalates specificity, potentially causing unintended style overrides and conflicts within the cascade. Concrete Best Practices and Examples Maintain Shallow Nesting:

CSS
/* Recommended */
section article { color: blue; }

/* Avoid */
body section article div p { color: red; }

The first example maintains a shallow nesting level, reducing specificity and complexity, while the second example demonstrates deep nesting, leading to increased specificity and potential conflicts. Strategize Specificity:

CSS
/* Specific Selector */
#header .navigation { background-color: #f0f0f0; }

/* General Selector */
.navigation { background-color: #ffffff; }

The specific selector will take precedence due to higher specificity, rendering the background color of .navigation as #f0f0f0.
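The importance → specificity → source-order arbitration described above can be sketched in a few lines of Python. The selector grammar here is a deliberately simplified assumption (no pseudo-classes, attribute selectors, or combinators beyond whitespace), so this is an illustration of the comparison logic, not a real CSS parser:

```python
import re

def specificity(selector: str) -> tuple:
    """Return (ids, classes, types) for a simple selector string.

    Ignores pseudo-classes, attribute selectors, and non-space
    combinators for brevity.
    """
    ids = len(re.findall(r'#[\w-]+', selector))
    classes = len(re.findall(r'\.[\w-]+', selector))
    # Type selectors: bare element names not preceded by '#' or '.'
    types = len(re.findall(r'(?:^|\s)([a-zA-Z][\w-]*)', selector))
    return (ids, classes, types)

# Higher tuple wins; a stable sort makes ties fall back to source order
# (the last declared rule of the maximal group wins).
rules = [('.text', 'red'), ('#content p', 'blue')]
winner = sorted(rules, key=lambda r: specificity(r[0]))[-1]
print(winner)  # ('#content p', 'blue') — the ID selector beats the class
```

With the two external rules from the example above, Rule 2 wins on specificity; only the inline style's higher importance overrides it.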
Prioritize Simplicity:

CSS
/* Simple */
.button { background-color: yellow; }

/* Complex */
div.container > div.content + button.button { background-color: yellow; }

The first example is simple and easy to manage, while the second example, being more complex, could lead to maintenance challenges and unintended styling issues. Conclusion Understanding and implementing the cascade and CSS nesting principles are integral for designing seamless and appealing web interfaces. By cultivating a balanced approach to nesting and specificity and by navigating the cascade wisely, developers can mitigate styling conflicts and enhance the coherence and maintainability of their stylesheets, delivering optimal and consistent user experiences across the web.
When people think of linting, the first thing that comes to mind is usually static code analysis for programming languages, but rarely for markup languages. In this article, I would like to share how our team developed ZK Client MVVM Linter, an XML linter that automates migration assessment for our new Client MVVM feature in the upcoming ZK 10 release. The basic idea is to compile a catalog of known compatibility issues as lint rules to allow users to assess the potential issues flagged by the linter before committing to the migration. For those unfamiliar with ZK, ZK is a Java framework for building enterprise applications; ZUL (ZK User Interface Markup Language) is its XML-based language for simplifying user interface creation. Through sharing our experience developing ZK Client MVVM Linter, we hope XML linters can find broader applications. File Parsing The Problem Like other popular linters, our ZUL linter starts by parsing source code into AST (abstract syntax tree). Although Java provides several libraries for XML parsing, they lose the original line and column numbers of elements in the parsing process. As the subsequent analysis stage will need this positional information to report compatibility issues precisely, our first task is to find a way to obtain and store the original line and column numbers in AST. How We Address This After exploring different online sources, we found a Stack Overflow solution that leverages the event-driven property of SAX Parser to store the end position of each start tag in AST. Its key observation was that the parser invokes the startElement method whenever it encounters the ending ‘>’ character. Therefore, the parser position returned by the locator must be equivalent to the end position of the start tag, making the startElement method the perfect opportunity for creating new AST nodes and storing their end positions. 
Java
public static Document parse(File file) throws Exception {
    Document document = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
    SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
    parser.parse(file, new DefaultHandler() {
        private Locator _locator;
        private final Stack<Node> _stack = new Stack<>();

        @Override
        public void setDocumentLocator(Locator locator) {
            _locator = locator;
            _stack.push(document);
        }

        @Override
        public void startElement(String uri, String localName, String qName, Attributes attributes) {
            // Create a new AST node
            Element element = document.createElement(qName);
            for (int i = 0; i < attributes.getLength(); i++)
                element.setAttribute(attributes.getQName(i), attributes.getValue(i));
            // Store its end position
            int lineNumber = _locator.getLineNumber(), columnNumber = _locator.getColumnNumber();
            element.setUserData("position", lineNumber + ":" + columnNumber, null);
            _stack.push(element);
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            Node element = _stack.pop();
            _stack.peek().appendChild(element);
        }
    });
    return document;
}

Building on the solution above, we implemented a more sophisticated parser capable of storing the position of each attribute. Our parser uses the end positions returned by the locator as reference points to reduce the task to finding attribute positions relative to the end position. Initially, we started with a simple idea of iteratively finding and removing the last occurrence of each attribute-value pair from the buffer.
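The same locator hook exists in Python's standard xml.sax module. The sketch below is a minimal, hypothetical illustration of the technique (it is not part of the ZK linter, and note that expat's locator may report the start of the tag rather than the position just past the '>'):

```python
import xml.sax
from xml.sax.handler import ContentHandler

class PositionHandler(ContentHandler):
    """Record the parser position observed at each start tag."""
    def __init__(self):
        super().__init__()
        self.positions = {}  # tag name -> "line:column"
        self._locator = None

    def setDocumentLocator(self, locator):
        # The parser hands us its Locator before parsing begins.
        self._locator = locator

    def startElement(self, name, attrs):
        line = self._locator.getLineNumber()
        column = self._locator.getColumnNumber()
        self.positions[name] = f"{line}:{column}"

handler = PositionHandler()
xml.sax.parseString(b"<root>\n  <child attr='x'/>\n</root>", handler)
print(handler.positions)  # root seen on line 1, child on line 2
```

As in the Java version, the key is that setDocumentLocator runs before parsing and startElement fires per tag, so positional information can be attached to each node as it is built.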
For example, if <elem attr1="value" attr2="value"> ends at 3:34 (line 3: column 34), our parser will perform the following steps:

Plain Text
1. Initialize buffer = <elem attr1="value" attr2="value">
2. Find buffer.lastIndexOf("value") = 28 → Update buffer = <elem attr1="value" attr2="
3. Find buffer.lastIndexOf("attr2") = 21 → Update buffer = <elem attr1="value"
4. Find buffer.lastIndexOf("value") = 14 → Update buffer = <elem attr1="
5. Find buffer.lastIndexOf("attr1") = 7 → Update buffer = <elem

From steps 3 and 5, we can conclude that attr2 and attr1 start at 3:21 and 3:7, respectively. Then, we further improved the mechanism to handle other formatting variations, such as a single start tag across multiple lines and multiple start tags on a single line, by introducing the start index and leading space stack to store the buffer indices where new lines start and the number of leading spaces of each line. For example, if there is a start tag that starts from line 1 and ends at 3:20 (line 3: column 20):

XML
<elem attr1="value
    across 2 lines"
    attr2 = "value">

Our parser will perform the following steps:

Plain Text
1. Initialize buffer = <elem attr1="value across 2 lines" attr2 = "value">
2. Initialize startIndexes = [0, 19, 35] and leadingSpaces = [0, 4, 4]
3. Find buffer.lastIndexOf("value") = 45
4. Find buffer.lastIndexOf("attr2") = 36
   → lineNumber = 3, startIndexes = [0, 19, 35] and leadingSpaces = [0, 4, 4]
   → columnNumber = 36 - startIndexes.peek() + leadingSpaces.peek() = 5
5. Find buffer.lastIndexOf("value across 2 lines") = 14
6. Find buffer.lastIndexOf("attr1") = 7
   → Update lineNumber = 1, startIndexes = [0], and leadingSpaces = [0]
   → columnNumber = 7 - startIndexes.peek() + leadingSpaces.peek() = 7

From steps 4 and 6, we can conclude that attr1 and attr2 start at 1:7 and 3:5, respectively.
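The single-line case can be sketched in Python (an illustrative re-implementation of the idea, not the linter's actual Java code; it assumes attribute values contain no spaces, unlike the full mechanism above):

```python
def attribute_columns(start_tag: str, attrs: dict) -> dict:
    """Recover each attribute's start column by repeatedly truncating the
    buffer at the last occurrence of the value, then of the name."""
    buffer = start_tag
    columns = {}
    # Walk attributes in reverse, mirroring the right-to-left scan.
    for name, value in reversed(list(attrs.items())):
        buffer = buffer[:buffer.rindex(value)]
        buffer = buffer[:buffer.rindex(name)]
        columns[name] = len(buffer) + 1  # 1-based column of the name
    return columns

tag = '<elem attr1="value" attr2="value">'
print(attribute_columns(tag, {"attr1": "value", "attr2": "value"}))
# {'attr2': 21, 'attr1': 7} — matching the worked example above
```

Truncating at the last occurrence of the value before searching for the name is what keeps a name like attr2 from being confused with an identical substring inside a later value.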
This mechanism translates into the code below:

Java
public void startElement(String uri, String localName, String qName, Attributes attributes) {
    // initialize buffer, startIndexes, and leadingSpaces
    int endLineNumber = _locator.getLineNumber(), endColNumber = _locator.getColumnNumber();
    for (int i = 0; _readerLineNumber <= endLineNumber; i++, _readerLineNumber++) {
        startIndexes.push(buffer.length());
        if (i > 0) _readerCurrentLine = _reader.readLine();
        buffer.append(' ').append((_readerLineNumber < endLineNumber
                ? _readerCurrentLine
                : _readerCurrentLine.substring(0, endColNumber - 1)).stripLeading());
        leadingSpaces.push(countLeadingSpaces(_readerCurrentLine));
    }
    _readerLineNumber--;
    // recover attribute positions
    int lineNumber = endLineNumber, columnNumber;
    Element element = document.createElement(qName);
    for (int i = attributes.getLength() - 1; i >= 0; i--) {
        String[] words = attributes.getValue(i).split("\\s+");
        for (int j = words.length - 1; j >= 0; j--)
            buffer.delete(buffer.lastIndexOf(words[j]), buffer.length());
        buffer.delete(buffer.lastIndexOf(attributes.getQName(i)), buffer.length());
        while (buffer.length() < startIndexes.peek()) {
            lineNumber--;
            leadingSpaces.pop();
            startIndexes.pop();
        }
        columnNumber = leadingSpaces.peek() + buffer.length() - startIndexes.peek();
        Attr attr = document.createAttribute(attributes.getQName(i));
        attr.setUserData("position", lineNumber + ":" + columnNumber, null);
        element.setAttributeNode(attr);
    }
    // recover element position
    buffer.delete(buffer.lastIndexOf(element.getTagName()), buffer.length());
    while (buffer.length() < startIndexes.peek()) {
        lineNumber--;
        leadingSpaces.pop();
        startIndexes.pop();
    }
    columnNumber = leadingSpaces.peek() + buffer.length() - startIndexes.peek();
    element.setUserData("position", lineNumber + ":" + columnNumber, null);
    _stack.push(element);
}

File Analysis Now that we have a parser that converts ZUL files into ASTs, we are ready to move on to the file analysis stage.
Our ZulFileVisitor class encapsulates the AST traversal logic and delegates the responsibility of implementing specific checking mechanisms to its subclasses. This design allows lint rules to be easily created by extending the ZulFileVisitor class and overriding the visit method for the node type the lint rule needs to inspect.

Java
public class ZulFileVisitor {
    private Stack<Element> _currentPath = new Stack<>();

    protected Stack<Element> getCurrentPath() {
        return _currentPath;
    }

    protected void report(Node node, String message) {
        System.err.println(node.getUserData("position") + " " + message);
    }

    protected void visit(Node node) {
        if (node.getNodeType() == Node.ELEMENT_NODE) {
            Element element = (Element) node;
            _currentPath.push(element);
            visitElement(element);
            NamedNodeMap attributes = element.getAttributes();
            for (int i = 0; i < attributes.getLength(); i++)
                visitAttribute((Attr) attributes.item(i));
        }
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++)
            visit(children.item(i));
        if (node.getNodeType() == Node.ELEMENT_NODE)
            _currentPath.pop();
    }

    protected void visitAttribute(Attr node) {}

    protected void visitElement(Element node) {}
}

Conclusion The Benefits For simple lint rules such as "row elements not supported," developing an XML linter may seem like overkill when manual checks would suffice. However, as the codebase expands or the number of lint rules increases over time, the advantages of linting quickly become noticeable compared to manual checks, which are both time-consuming and prone to human error.

Java
class SimpleRule extends ZulFileVisitor {
    @Override
    protected void visitElement(Element node) {
        if ("row".equals(node.getTagName()))
            report(node, "`row` not supported");
    }
}

On the other hand, complicated rules involving ancestor elements are where XML linters truly shine.
Consider a lint rule that only applies to elements inside certain ancestor elements, such as "row elements not supported outside rows elements." Our linter can efficiently identify the countless variations that satisfy such a rule, which cannot be done manually or with a simple file search.

Java
class ComplexRule extends ZulFileVisitor {
    @Override
    protected void visitElement(Element node) {
        if ("row".equals(node.getTagName())) {
            boolean outsideRows = getCurrentPath().stream()
                    .noneMatch(element -> "rows".equals(element.getTagName()));
            if (outsideRows)
                report(node, "`row` not supported outside `rows`");
        }
    }
}

Now It's Your Turn Despite XML linting not being widely adopted in the software industry, we hope our ZK Client MVVM Linter, which automates our migration assessment, demonstrates the benefits of XML linting and perhaps even helps you develop your own XML linter.
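For readers outside the Java ecosystem, the same ancestor-aware rule can be sketched with Python's standard library (a toy illustration using the tag names from the example, not the ZK linter itself — it reports rule violations but not positions):

```python
import xml.etree.ElementTree as ET

def lint_row_outside_rows(source: str) -> list:
    """Report every <row> element that has no <rows> ancestor."""
    reports = []

    def visit(node, path):
        # path plays the role of the visitor's current-path stack.
        if node.tag == "row" and all(a.tag != "rows" for a in path):
            reports.append("`row` not supported outside `rows`")
        for child in node:
            visit(child, path + [node])

    visit(ET.fromstring(source), [])
    return reports

print(lint_row_outside_rows("<zul><row/><grid><rows><row/></rows></grid></zul>"))
# one report: only the first <row> has no <rows> ancestor
```

Maintaining the ancestor path during traversal is what lets the rule distinguish structurally identical elements by context, which a plain text search cannot do.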
Python is a versatile and user-friendly programming language that has taken the coding world by storm. Its simple syntax and extensive libraries have made it the go-to choice for both beginners and experienced developers. Python’s popularity can be attributed not only to its technical merits but also to its unique guiding principles known as the “Zen of Python.” The “Zen of Python” is a collection of aphorisms that encapsulate the guiding philosophy for writing code in Python. Authored by Tim Peters, a renowned software developer and one of the earliest contributors to Python, these principles were first introduced in a mysterious way. In the early days of Python, the Zen of Python was included as an Easter egg in the language. To reveal it, users had to type import this in the Python interpreter. This enigmatic and playful approach added an element of curiosity to the already thriving Python community, leading to much speculation about its origins. The Zen of Python was finally revealed in its entirety, and it has since become a cherished set of guidelines for Python developers. It captures the essence of Python’s design philosophy, emphasizing simplicity, elegance, and readability as paramount virtues in code. What Is Zen? The term “Zen” originates from Zen Buddhism, a school of Mahayana Buddhism that emphasizes meditation, intuition, and a direct understanding of reality. In the context of Python, the Zen of Python embodies a similar philosophy, promoting simplicity, clarity, and harmony in code. The Zen of Python serves as a guiding compass for Python developers, providing a set of aphorisms that encourage a mindful and thoughtful approach to coding. By adhering to these principles, Python programmers strive to create code that not only accomplishes its intended purpose but also conveys a sense of elegance and beauty.
In the sections that follow, we will explore each of the Zen of Python principles in detail, providing code examples to illustrate their practical application. By the end of this blog post, you will have a deeper understanding of how to harness the power of Zen in your Python code, propelling you towards becoming a more proficient and mindful Python developer. So, let’s embark on this enlightening quest to unravel the Zen of Python! 1. Beautiful Is Better Than Ugly Python places a strong emphasis on code aesthetics. Writing beautiful code not only enhances its visual appeal but also makes it easier for others to comprehend and collaborate. As a beginner, take pride in crafting clean and well-structured code, and you’ll quickly appreciate the artistry in Python programming.

Python
# Non-Pythonic code
def add_numbers(a,b):return a+b

# Pythonic code
def add_numbers(a, b):
    return a + b

2. Explicit Is Better Than Implicit In Python, as in life, clarity is essential. Be explicit in your code, leaving no room for ambiguity. Clearly state your intentions, define variables explicitly, and avoid magic numbers or hidden behaviors. This approach ensures that your code is easy to understand and less prone to errors.

Python
# Implicit
def calculate_area(radius):
    return 3.14 * radius ** 2

# Explicit
from math import pi

def calculate_area(radius):
    return pi * radius ** 2

3. Simple Is Better Than Complex Simplicity is a core principle of Python. Embrace straightforward solutions to solve problems, avoiding unnecessary complexities. As a beginner, you’ll find that Python’s simplicity empowers you to express ideas more effectively and efficiently.

Python
# Complex code
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

# Simple code
def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

4. Complex Is Better Than Complicated Wait! Didn’t we just say, “Simple is better than complex”!?
While simplicity is favored, complex problems may require complex solutions. However, complexity should not equate to obscurity. Aim to create clear and well-structured code, even when dealing with intricate challenges.

Python
# Complicated
def calculate_fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        a, b = 0, 1
        for i in range(n - 1):
            a, b = b, a + b
        return b

# Complex but not complicated
def calculate_fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return calculate_fibonacci(n - 1) + calculate_fibonacci(n - 2)

5. Flat Is Better Than Nested Code readability and simplicity play pivotal roles in effective programming. Instead of delving into intricate layers of indentation and nested structures, break down complex tasks into manageable components. The result is code that is more comprehensible, maintainable, and easier to collaborate on.

Python
# Nested
def process_data(data):
    result = []
    for item in data:
        if item > 0:
            if item % 2 == 0:
                result.append(item)
    return result

# Flat
def process_data(data):
    result = []
    for item in data:
        if item > 0 and item % 2 == 0:
            result.append(item)
    return result

6. Sparse Is Better Than Dense It’s tempting to pack a lot of functionality into a single line, and it might impress your non-coder friends, but it definitely won’t impress your colleagues. Spread out code is easier to understand, and don’t forget beautiful is better than ugly.

Python
# Dense
def process_data(data):result=[];for item in data:if item>0:result.append(item);return result

# Sparse
def process_data(data):
    result = []
    for item in data:
        if item > 0:
            result.append(item)
    return result

7. Readability Counts As developers, we read code more often than we write it. Therefore, prioritize code readability by adhering to consistent formatting, using meaningful variable names, and following Python’s style guide (PEP 8). Readable code is easier to maintain, debug, and collaborate on.
Python
# Hard to read
def p(d):
    r = []
    for i in d:
        if i > 0:
            r.append(i)
    return r

# Easy to read
def process_data(data):
    result = []
    for item in data:
        if item > 0:
            result.append(item)
    return result

8. Special Cases Aren’t Special Enough to Break the Rules There are well-defined best practices and rules when it comes to writing good Python code. Stick to them. 9. Although Practicality Beats Purity This contradicts #8. Well, kind of. If your code is easier to follow but deviates from the earlier principle, go for it! Beginners have a tough time finding the balance between practicality and purity, but don’t worry; it gets better with experience. 10. Errors Should Never Pass Silently Always opt for comprehensive error handling. By catching and handling errors, you can prevent unexpected behavior and make your code more robust.

Python
# Errors pass silently
def divide(a, b):
    try:
        return a / b
    except:
        pass

# Errors are handled explicitly
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        print("Error: Cannot divide by zero.")

11. Unless Explicitly Silenced In certain scenarios, there might be a deliberate decision to overlook potential errors triggered by your program. In such cases, the recommended approach is to purposefully suppress these errors by addressing them explicitly within your code.

Python
# Explicitly silence exceptions
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        pass

12. In the Face of Ambiguity, Refuse the Temptation To Guess Don’t attempt random solutions with the singular purpose of making your code “work.” Instead, dig deeper into the problem, debug, and understand what is causing it. Then, and only then, work on fixing it. Don’t guess! 13. There Should Be One — And Preferably Only One — Obvious Way To Do It Python’s motto encourages consistency and standardization. Whenever possible, use the idiomatic Pythonic way to solve problems.
This uniformity simplifies code reviews, enhances collaboration, and ensures a smooth experience for fellow developers. 14. Although That Way May Not Be Obvious at First Unless You’re Dutch Tim Peters introduces a dose of humor that captures the essence of this principle. In a playful nod to Guido van Rossum, the creator of Python, Peters quips that only the “Zen Master” himself might possess an encyclopedic knowledge of all the seemingly obvious ways to implement solutions in Python. 15. Now Is Better Than Never Don’t hesitate to start coding! Embrace the “now” mindset and dive into projects. As a beginner, getting hands-on experience is crucial for learning and improving your coding skills. 16. Although Never Is Often Better Than Right Now While being proactive is important, rushing into coding without a clear plan can lead to suboptimal solutions and increased technical debt. Take the time to understand the problem and plan your approach before diving into writing code. 17. If the Implementation Is Hard To Explain, It’s a Bad Idea Strive for code that is easy to explain and understand. If you find yourself struggling to articulate your code’s logic, it might be a sign that it needs simplification. Aim for code that communicates its purpose effectively.

Python
# Hard to explain implementation (using bitwise operators)
def is_even(n):
    return n & 1 == 0

# Easy to explain implementation (using modulo operator)
def is_even(n):
    return n % 2 == 0

18. If the Implementation Is Easy To Explain, It May Be a Good Idea Conversely, if your code’s logic is evident and easy to explain, you’re on the right track. Python’s elegance lies in its ability to make complex tasks appear simple. 19. Namespaces Are One Honking Great Idea — Let’s Do More of Those! Python’s use of namespaces allows for organizing code logically and reducing naming conflicts. Embrace modular design and create reusable functions and classes, keeping your codebase organized and maintainable.
However, also keep in mind that flat is better than nested. It is important to strike the right balance.

Python
# Without namespaces
def calculate_square_area(side_length):
    return side_length * side_length

def calculate_circle_area(radius):
    return 3.14 * radius * radius

square_area = calculate_square_area(4)
circle_area = calculate_circle_area(3)

# With namespaces
class Geometry:
    @staticmethod
    def calculate_square_area(side_length):
        return side_length * side_length

    @staticmethod
    def calculate_circle_area(radius):
        return 3.14 * radius * radius

square_area = Geometry.calculate_square_area(4)
circle_area = Geometry.calculate_circle_area(3)

By using namespaces, we group related functions, classes, or variables under a common namespace, minimizing the chances of naming collisions and enhancing code organization. Conclusion As a Python beginner, understanding and embracing the Zen of Python will set you on the path to becoming a proficient developer. Write code that is beautiful, explicit, and simple while prioritizing readability and adhering to Pythonic conventions. With practice and dedication, you’ll soon find yourself crafting elegant and efficient solutions to real-world problems. The Zen of Python will be your guiding light, empowering you to write code that not only works but also inspires and delights. Happy coding, and welcome to the Python community! As you embark on your Python journey, remember to seek inspiration from the Zen of Python. Let its principles guide you in creating code that not only functions as intended but also reflects the artistry and elegance of Pythonic programming. The adventure awaits — embrace the Zen and unlock the true potential of Python!
Programming languages are the building blocks of software development, enabling developers to create applications, websites, and other digital solutions. The choice of programming language can greatly impact the efficiency, scalability, and functionality of a project. In this guide, we will explore the top 10 programming languages for software development, highlighting their strengths, use cases, and popularity within the tech industry. 1. Python Python is renowned for its simplicity and readability, making it an ideal choice for both beginners and experienced developers. Its versatile nature allows developers to create web applications, data analysis tools, artificial intelligence (AI) algorithms, and more. Python's vast library ecosystem, including frameworks like Django and Flask, accelerates development by providing pre-built components. 2. JavaScript JavaScript is a dynamic scripting language primarily used for web development. It enables interactive and responsive user interfaces, making it essential for front-end web development. With Node.js, JavaScript also empowers server-side development. Popular libraries and frameworks like React, Angular, and Vue.js enhance its capabilities for creating modern web applications. 3. Java Java's "write once, run anywhere" philosophy has made it a staple in enterprise-level applications and Android app development. It's known for its strong community support, extensive libraries, and platform independence. Java's object-oriented approach and robustness contribute to its use in building large-scale applications and systems. 4. C# C# (C Sharp) is a language developed by Microsoft, primarily used for developing Windows applications, games with Unity game engine, and web applications with ASP.NET. Its integration with the .NET framework provides a powerful environment for creating a wide range of software solutions, from desktop applications to cloud services. 5. 
C++ C++ builds upon the foundation of C and adds features like object-oriented programming. It is commonly used for system-level programming, game development, and performance-critical applications. Its efficient memory management and performance optimization capabilities make it a favorite for tasks where speed is crucial. 6. Ruby Ruby's elegant syntax and focus on developer happiness have led to its popularity in web development. The Ruby on Rails framework, often referred to as Rails, revolutionized the way web applications are built, promoting convention over configuration and emphasizing rapid development. 7. Swift Swift is Apple's modern programming language designed for building iOS, macOS, watchOS, and tvOS applications. With a clean syntax and robust features, Swift simplifies app development, increases performance, and enhances security compared to its predecessor, Objective-C. 8. PHP PHP (Hypertext Preprocessor) is a server-side scripting language widely used for web development. It's particularly suited for creating dynamic web pages and web applications. Popular content management systems like WordPress and e-commerce platforms like Magento are built on PHP. 9. TypeScript TypeScript is a superset of JavaScript that introduces static typing and other features to enhance developer productivity and maintainability. Developed by Microsoft, TypeScript is often used in large-scale applications to catch type-related errors during development and improve code quality. 10. Go (Golang) Go, also known as Golang, is a statically typed language developed by Google. It focuses on simplicity, efficiency, and performance, making it a great choice for building scalable and concurrent applications. Go's built-in support for concurrent programming through goroutines and channels sets it apart in the development landscape. Conclusion The world of software development is rich with a variety of programming languages, each tailored to different needs and applications. 
The top 10 programming languages covered in this guide offer a diverse range of tools and capabilities to create anything from simple web applications to complex systems and artificial intelligence algorithms. When selecting a programming language for your project, consider factors such as your project's requirements, your team's expertise, and the community support surrounding the language. By making an informed choice, you can ensure the success and efficiency of your software development endeavors.
OpenAI’s GPT has emerged as the foremost AI tool globally and is proficient at addressing queries based on its training data. However, it cannot answer questions about unknown topics:

- Recent events after Sep 2021
- Your non-public documents
- Information from past conversations

This task gets even more complicated when you deal with real-time data that frequently changes. Moreover, you cannot feed extensive content to GPT, nor can it retain your data over extended periods. In this case, you need to build a custom LLM (Large Language Model) app efficiently to give context to the answer process. This piece will walk you through the steps to develop such an application utilizing the open-source LLM App library in Python. The source code is on GitHub (linked below in the section "Build a ChatGPT Python API for Sales"). Learning Objectives You will learn the following throughout the article:

- The reason why you need to add custom data to ChatGPT
- How to use embeddings, prompt engineering, and ChatGPT for better question-answering
- How to build your own ChatGPT with custom data using the LLM App
- How to create a ChatGPT Python API for finding real-time discounts or sales prices

Why Provide ChatGPT With a Custom Knowledge Base? Before jumping into the ways to enhance ChatGPT, let’s first explore the manual methods and identify their challenges. Typically, ChatGPT is expanded through prompt engineering. Assume you want to find real-time discounts/deals/coupons from various online markets. For example, when you ask ChatGPT “Can you find me discounts this week for Adidas men’s shoes?”, the standard response from the ChatGPT UI without custom knowledge offers general advice on locating discounts but lacks specificity regarding where or what type of discounts, among other details. To help the model, we supplement it with discount information from a trustworthy data source.
You must engage with ChatGPT by adding the initial document content prior to posting the actual questions. We will collect this sample data from the Amazon products deal dataset and insert only a single JSON item we have into the prompt. As you can see, you then get the expected output, and this is quite simple to achieve since ChatGPT is now context-aware. However, the issue with this method is that the model’s context is restricted (GPT-4’s maximum text length is 8,192 tokens). This strategy quickly becomes problematic when the input data is huge: you may find thousands of items on sale, and you cannot provide that much data as an input message. Also, once you have collected your data, you may want to clean, format, and preprocess it to ensure data quality and relevancy. If you utilize the OpenAI Chat Completion endpoint or build custom plugins for ChatGPT, it introduces other problems as follows: Cost — By providing more detailed information and examples, the model’s performance might improve, though at a higher cost (for GPT-4 with an input of 10k tokens and an output of 200 tokens, the cost is $0.624 per prediction). Repeatedly sending identical requests can escalate costs unless a local cache system is utilized. Latency — A challenge with utilizing ChatGPT APIs for production, like those from OpenAI, is their unpredictability. There is no guarantee regarding the provision of consistent service. Security — When integrating custom plugins, every API endpoint must be specified in the OpenAPI spec for functionality. This means you’re revealing your internal API setup to ChatGPT, a risk many enterprises are skeptical of. Offline Evaluation — Conducting offline tests on code and data output or replicating the data flow locally is challenging for developers. This is because each request to the system may yield varying responses.
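The per-prediction figure quoted above is consistent with GPT-4 32k-context pricing at the time of writing ($0.06 per 1K input tokens and $0.12 per 1K output tokens; these rates are an assumption and change over time):

```python
def prediction_cost(input_tokens, output_tokens,
                    input_rate=0.06, output_rate=0.12):
    """Cost in USD for one prediction; rates are per 1,000 tokens."""
    return input_tokens / 1000 * input_rate + output_tokens / 1000 * output_rate

# 10k input tokens + 200 output tokens, as in the example above.
print(round(prediction_cost(10_000, 200), 3))  # 0.624
```

At any volume, this kind of arithmetic makes it clear why stuffing thousands of items into every prompt is not economical, and why retrieval plus caching matters.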
Using Embeddings, Prompt Engineering, and ChatGPT for Question-Answering A promising approach you will find on the internet is using LLMs to create embeddings and then building your applications on top of those embeddings, such as search-and-ask systems. In other words, instead of querying ChatGPT directly through the Chat Completion endpoint, you would run the following query: Given the following discounts data: {input_data}, answer this query: {user_query}. The concept is straightforward. Rather than posting a question directly, the method first creates vector embeddings through the OpenAI API for each input document (text, image, CSV, PDF, or other types of data), indexes the generated embeddings for fast retrieval, stores them in a vector database, and leverages the user’s question to search and obtain relevant documents from that database. These documents are then presented to ChatGPT along with the question as a prompt. With this added context, ChatGPT can respond as if it had been trained on the internal dataset. On the other hand, if you use Pathway’s LLM App, you don’t even need a vector database. It implements real-time in-memory data indexing, reading data directly from any compatible storage, without having to query a vector document database, which comes with costs like increased prep work, infrastructure, and complexity. Keeping sources and vectors in sync is painful, and it is even harder if the underlying input data changes over time and requires re-indexing. ChatGPT With Custom Data Using the LLM App The simple steps below explain a data-pipelining approach to building a ChatGPT app for your data with the LLM App. Collect: your app reads the data from various data sources (CSV, JSON Lines, SQL databases, Kafka, Redpanda, Debezium, and so on) in real time when streaming mode is enabled with Pathway (you can also test data ingestion in static mode). It also maps each data row into a structured document schema for better management of large data sets. 
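A toy, dependency-free sketch of this retrieve-then-ask flow (the tiny hand-made vectors stand in for real OpenAI embeddings):

```python
import math

# Toy retrieval flow: embed documents, index them, pull the most relevant
# one for a query, and prepend it to the prompt. The 3-d vectors below are
# hand-made stand-ins for real OpenAI embeddings; everything is illustrative.
docs = {
    "adidas men's shoes 40% off this week": [0.9, 0.1, 0.0],
    "grocery coupons for dairy products":   [0.0, 0.8, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index):
    """Return the document whose embedding is closest to the query."""
    return max(index, key=lambda doc: cosine(query_vec, index[doc]))

query_vec = [0.8, 0.2, 0.0]  # pretend embedding of the user's question
best = retrieve(query_vec, docs)
prompt = f"Given the following discounts data: {best}, answer this query: ..."
```

A real system would batch the embedding calls and use an approximate-nearest-neighbor index, but the shape of the pipeline is the same.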
Preprocess: optionally, you do light data cleaning by removing duplicates, irrelevant information, and noisy data that could affect your responses’ quality, and you extract the data fields needed for further processing. At this stage, you can also mask or hide private data to avoid it being sent to ChatGPT. Embed: each document is embedded with the OpenAI API, and the embedding result is retrieved. Indexing: an index is constructed on the generated embeddings in real time. Search: given a user question, say from an API-friendly interface, generate an embedding for the query from the OpenAI API. Using the embeddings, retrieve the most relevant entries from the vector index by relevance to the query, on the fly. Ask: insert the question and the most relevant sections into a message to GPT, and return GPT’s answer (via the Chat Completion endpoint). Build a ChatGPT Python API for Sales Now that we have a clear picture of how the LLM App works from the previous section, you can follow the steps below to build a discount-finder app. The project source code can be found on GitHub. If you want to start using the app quickly, you can skip this part, clone the repository, and run the code sample by following the instructions in the README.md file there. Sample Project Objective Inspired by this article on enterprise search, our sample app exposes an HTTP REST API endpoint in Python that answers user queries about current sales by retrieving the latest deals from various sources (CSV, JSON Lines, API, message brokers, or databases) and leverages the OpenAI API Embeddings and Chat Completion endpoints to generate AI assistant responses. Step 1: Data Collection (Custom Data Ingestion) For simplicity, we can use any JSON Lines file as a data source. The app takes JSON Lines files like discounts.jsonl and uses this data when processing user queries. The data source expects each line to contain a doc object. Make sure that you convert your input data to JSON Lines first. 
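As a quick sketch of that conversion step, assuming your raw records start out as plain Python dicts (the field values here are illustrative), each record becomes one line with a single doc field:

```python
import json

# Turn arbitrary records into the JSON Lines shape the app expects:
# one JSON object per line, each with a single "doc" string field.
# The records themselves are illustrative.
records = [
    {"title": "Deal on Crocs", "deal_price": 35.48},
    {"title": "Adidas men's shoes", "deal_price": 49.99},
]

with open("discounts.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps({"doc": json.dumps(record)}) + "\n")

# Each line now parses back to a {"doc": "..."} object:
with open("discounts.jsonl") as f:
    lines = [json.loads(line) for line in f]
```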
Here is an example of a JSON Lines file with a single row: {"doc": "{'position': 1, 'link': 'https://www.amazon.com/deal/6123cc9f', 'asin': 'B00QVKOT0U', 'is_lightning_deal': False, 'deal_type': 'DEAL_OF_THE_DAY', 'is_prime_exclusive': False, 'starts_at': '2023-08-15T00:00:01.665Z', 'ends_at': '2023-08-17T14:55:01.665Z', 'type': 'multi_item', 'title': 'Deal on Crocs, DUNLOP REFINED(\u30c0\u30f3\u30ed\u30c3\u30d7\u30ea\u30d5\u30a1\u30a4\u30f3\u30c9)', 'image': 'https://m.media-amazon.com/images/I/41yFkNSlMcL.jpg', 'deal_price_lower': {'value': 35.48, 'currency': 'USD', 'symbol': '$', 'raw': '35.48'}, 'deal_price_upper': {'value': 52.14, 'currency': 'USD', 'symbol': '$', 'raw': '52.14'}, 'deal_price': 35.48, 'list_price_lower': {'value': 49.99, 'currency': 'USD', 'symbol': '$', 'raw': '49.99'}, 'list_price_upper': {'value': 59.99, 'currency': 'USD', 'symbol': '$', 'raw': '59.99'}, 'list_price': {'value': 49.99, 'currency': 'USD', 'symbol': '$', 'raw': '49.99 - 59.99', 'name': 'List Price'}, 'current_price_lower': {'value': 35.48, 'currency': 'USD', 'symbol': '$', 'raw': '35.48'}, 'current_price_upper': {'value': 52.14, 'currency': 'USD', 'symbol': '$', 'raw': '52.14'}, 'current_price': {'value': 35.48, 'currency': 'USD', 'symbol': '$', 'raw': '35.48 - 52.14', 'name': 'Current Price'}, 'merchant_name': 'Amazon Japan', 'free_shipping': False, 'is_prime': False, 'is_map': False, 'deal_id': '6123cc9f', 'seller_id': 'A3GZEOQINOCL0Y', 'description': 'Deal on Crocs, DUNLOP REFINED(\u30c0\u30f3\u30ed\u30c3\u30d7\u30ea\u30d5\u30a1\u30a4\u30f3\u30c9)', 'rating': 4.72, 'ratings_total': 6766, 'page': 1, 'old_price': 49.99, 'currency': 'USD'}"} The cool part is that the app is always aware of changes in the data folder. If you add another JSON Lines file, the LLM App automatically updates the AI model’s responses. 
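Note that in this sample the doc payload is written as a Python-style literal (single quotes, False) rather than strict JSON, so a sketch of decoding a line back might use ast.literal_eval for the inner string:

```python
import ast
import json

# The sample line stores the product record as a Python-style literal
# inside the "doc" string, so json.loads alone won't decode the inner
# payload; ast.literal_eval handles the single quotes and False safely.
# This shortened line mirrors the structure of the sample above.
line = '{"doc": "{\'asin\': \'B00QVKOT0U\', \'deal_price\': 35.48, \'is_prime\': False}"}'

outer = json.loads(line)               # -> {"doc": "{'asin': ...}"}
record = ast.literal_eval(outer["doc"])

print(record["deal_price"])            # 35.48
```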
Step 2: Data Loading and Mapping With Pathway’s JSON Lines input connector, we will read the local JSON Lines file, map data entries into a schema, and create a Pathway Table. See the full source code in app.py: ... sales_data = pw.io.jsonlines.read( "./examples/data", schema=DataInputSchema, mode="streaming" ) Map each data row into a structured document schema. See the full source code in app.py: class DataInputSchema(pw.Schema): doc: str Step 3: Data Embedding Each document is embedded with the OpenAI API, and the embedding result is retrieved. See the full source code in embedder.py: ... embedded_data = embeddings(context=sales_data, data_to_embed=sales_data.doc) Step 4: Data Indexing Then we construct an instant index on the generated embeddings: index = index_embeddings(embedded_data) Step 5: User Query Processing and Indexing We create a REST endpoint, take a user query from the API request payload, and embed the user query with the OpenAI API. ... query, response_writer = pw.io.http.rest_connector( host=host, port=port, schema=QueryInputSchema, autocommit_duration_ms=50, ) embedded_query = embeddings(context=query, data_to_embed=pw.this.query) Step 6: Similarity Search and Prompt Engineering We perform a similarity search by using the index to identify the most relevant matches for the query embedding. Then we build a prompt that merges the user’s query with the fetched relevant data results and send the message to the ChatGPT Completion endpoint to produce a proper and detailed response. responses = prompt(index, embedded_query, pw.this.query) We followed the same in-context learning approach when we crafted the prompt and added internal knowledge to ChatGPT in prompt.py. prompt = f"Given the following discounts data: \n {docs_str} \nanswer this query: {query}" Step 7: Return the Response The final step is just to return the API response to the user. 
# Return the API response to the user response_writer(responses) Step 8: Put Everything Together Now, if we put all the above steps together, you have an LLM-enabled Python API for custom discount data ready to use, as you can see in the implementation in the app.py Python script. import pathway as pw from common.embedder import embeddings, index_embeddings from common.prompt import prompt def run(host, port): # Given a user question as a query from your API query, response_writer = pw.io.http.rest_connector( host=host, port=port, schema=QueryInputSchema, autocommit_duration_ms=50, ) # Real-time data coming from external data sources such as jsonlines file sales_data = pw.io.jsonlines.read( "./examples/data", schema=DataInputSchema, mode="streaming" ) # Compute embeddings for each document using the OpenAI Embeddings API embedded_data = embeddings(context=sales_data, data_to_embed=sales_data.doc) # Construct an index on the generated embeddings in real-time index = index_embeddings(embedded_data) # Generate embeddings for the query from the OpenAI Embeddings API embedded_query = embeddings(context=query, data_to_embed=pw.this.query) # Build prompt using indexed data responses = prompt(index, embedded_query, pw.this.query) # Feed the prompt to ChatGPT and obtain the generated answer. response_writer(responses) # Run the pipeline pw.run() class DataInputSchema(pw.Schema): doc: str class QueryInputSchema(pw.Schema): query: str (Optional) Step 9: Add an Interactive UI To make your app more interactive and user-friendly, you can use Streamlit to build a front-end app. See the implementation in this app.py file. Running the App Follow the instructions in the README.md (linked earlier) file’s "How to run the project" section, and you can start asking questions about discounts; the API will respond according to the discount data sources you have added. 
After we give this knowledge to GPT through the UI (by applying a data source), look how it replies: The app takes both Rainforest API and discounts.csv file documents into account (merging data from these sources instantly), indexes them in real time, and uses this data when processing queries. Further Improvements We’ve only explored a few capabilities of the LLM App by adding domain-specific knowledge like discounts to ChatGPT. There is more you can achieve: Incorporate additional data from external APIs, along with various files (such as JSON Lines, PDF, Doc, HTML, or text format), databases like PostgreSQL or MySQL, and stream data from platforms like Kafka, Redpanda, or Debezium. Maintain a data snapshot to observe variations in sales prices over time, as Pathway provides a built-in feature to compute differences between two alterations of the data. Beyond making data accessible via API, the LLM App allows you to relay processed data to other downstream connectors, such as BI and analytics tools. For instance, set it up to receive alerts upon detecting price shifts.
Long gone are the days when a viewer’s attention was quickly captured by a simple, plain HTML website. The trend has moved toward animation and graphics, driven by upgrades in technology and design. Developing a website requires both creative and technical skills: layouts, animations, and graphics can greatly overhaul your website’s look and feel. As you already know, Cascading Style Sheets, or CSS, is an ideal way to spice up your web design. It is a fundamental technology that allows developers to control their websites’ or web apps’ layout and visual appearance. According to W3Techs, as of January 2023, approximately 96.9% of websites use CSS, which itself shows it is an integral part of modern web design. As web development evolves, new CSS trends are emerging to help developers create more visually stunning websites. From responsive design and animation to new techniques and styling, CSS trends are constantly changing and adapting to meet the needs of modern web design. Since CSS trends are cyclical, it’s reasonable to assume that by 2023, there will be some new CSS trends in web development. In this article, discover the 15 best CSS trends to look for in 2023. These trends will help you create visually stunning responsive designs by unleashing the power of CSS. How Does CSS Help Your Website? CSS stands for Cascading Style Sheets. It is a language for creating style sheets that describe the layout and formatting of a document written in a markup language. It works with HTML to modify the look and feel of web pages and user interfaces, and it can be used with any XML document type, including plain XML, SVG, and XUL. With the help of CSS, you can restyle old HTML-written documents or create a new style with CSS code. Here are some benefits CSS offers to your website. Before CSS, presentational tags and attributes such as font, color, and background had to be repeated throughout a website; CSS was developed to solve this problem. 
Help you create a consistent design across multiple web pages, and offer reusability so the same styling can be applied to different elements and websites. CSS offers more specific attributes than plain HTML to define the website’s look and feel. Provide visual cues to improve the website’s accessibility. Boost website SEO by presenting digital content clearly and concisely. 2023 CSS Trends To Follow Now that you have the gist of CSS and its benefits, let’s start with our list of the best CSS trends for 2023. Note: The browser compatibility data herein have been taken from CanIUse. 1. CSS Grid CSS Grid is a powerful layout module that allows you to create sophisticated, responsive grid layouts. It’s fully supported by modern browsers and is gaining popularity among web developers. This CSS trend can handle both rows and columns easily. Subgrid is a handy feature that has been added to Grid Layout. Using the Subgrid feature, you can create a subgrid that mimics the layout of its parent grid. Normally, a child grid chooses its own dimensions and gaps when nested inside another grid display; with Subgrid, the layout of the parent grid is applied, although the subgrid can still override certain parts if necessary. Browser Support: 95.91% 2. CSS Writing Mode Depending on the language, the CSS writing-mode property adjusts the text’s alignment so that it can be read from top to bottom or from left to right. Say, for instance, that we wish to add some text that is read from top to bottom rather than from left to right. This is helpful for languages where text is frequently positioned vertically, like Chinese, Japanese, or Korean. In English, you’ll likely employ this property for aesthetic reasons. Browser Support: 97.7% 3. Scroll Snap Behavior To control a web browser’s scroll snap behavior, CSS offers a valuable collection of properties. 
Some of this functionality has been available for a long time, but other parts are only now arriving in recent browser versions. The best thing about this CSS trend is that just one-third of CSS users know about it. Using the scroll-snap-type property, you can modify the scroll position on a container in various ways. Developers gain greater precision, while end users enjoy a smoother, more controllable user experience. Browser Support: 95.89% 4. Container Queries Container queries are not yet fully established in CSS, but they will be, and they’ll have a significant influence on how we perceive responsive design. The fundamental notion is that you can specify a breakpoint depending on the size of a parent container, in addition to the viewport and media. This includes adjusting a layout based on the dimensions of the various containers that appear throughout the nested layers of a user interface. More than just a CSS trend, CSS Container Queries are a significant move that will probably spark a wave of UI enhancements. Browser Support: 76.94% 5. New Color Palettes CSS practitioners already use RGB to beautify web pages. Recently, CSS introduced three new color palettes: HWB, LAB, and LCH. HWB: An acronym for Hue, Whiteness, and Blackness. It’s an easy format for people to read: you choose a color and then add white and black. Recent releases of Chrome, Firefox, and Safari all support it. Browser Support: 87.71% LAB: Derived from CIELAB color theory, it is considered the most theoretically complex of the new color spaces. The LAB color space makes the bold claim of including all colors humans can perceive. Like LCH, only Safari is currently compatible with this CSS trend. LCH: It stands for Lightness, Chroma, and Hue and is renowned for broadening the palette of colors that are accessible. Only Safari supports LCH. Browser Support: 15.38% 6. 
CSS Variables CSS Variables, also known as CSS Custom Properties, have been a popular CSS trend since 2015 and are now getting more and more attention from CSS users. CSS Variables allow you to store a value and reuse it elsewhere in your stylesheets. They help remove redundancy, add flexibility, and improve the readability of your code. Browser Support: 95.81% 7. Viewport Units Setting viewport units is a hassle for everyone who has attempted to code a website for Safari on iOS. The mobile browser shows containers sized in the vh unit as smaller than they should be, and you need a script that automatically resizes the container to get around this bug. Beyond the inconvenience of loading a new script, some workarounds harm Chrome users. Thankfully, CSS now supports new relative lengths and viewport specifications. A few of these are “vw”, “svw”, “lvw”, and “dvw”; they measure 1% of the width of the UA-default, small, large, and dynamic viewport sizes, respectively. Browser Support: 97.53% 8. Cascade Layers If the next element in the cascade has a greater level of specificity, CSS overrides the style changes to the first element. Due to vast codebases, this problem is always present in large projects. Here is where CSS Cascade Layers come in. Cascade Layers give developers better flexibility over themes, frameworks, and designs to utilize the cascading system fully. They provide direct manipulation and administration of the underlying cascade logic, in contrast to the original cascading model centered around heuristics. This CSS trend ensures that components won’t always adhere to the base styles by adding a second layer to the cascade to define style variants. Instead, components are produced in accordance with the rules written on the layer and the established hierarchy of the layers. Browser Support: 87.57% 9. 
Content Visibility The content-visibility property in CSS helps speed up the rendering of content on a web page so users can interact with the content while the rest of the page is still loading. With the help of this property, developers can tell browsers which parts of the page contain isolated content; in return, browsers can optimize that content by delaying its rendering calculations. Content Visibility depends on the primitives of the CSS Containment Spec. So far, only Chromium (since version 85) supports the content-visibility property; however, the CSS Containment Spec is supported in all major browsers. Browser Support: 71.40% 10. Gap The gap property is an emerging CSS trend that helps define the gap between rows and columns, formerly known as grid-gap. It serves as a shorthand for the following properties: row-gap column-gap We use the gap property with a single value to indicate the same space between rows and columns. If the distance between the rows differs from that between the columns, we use the gap property with two values, first defining the distance between the rows and then between the columns. You can also use the two properties row-gap and column-gap individually to make the code more transparent and understandable. Before the gap property, designers needed to use the margin property, with certain limitations such as adding an indent between the element and the edge of the container. In contrast, the gap property allows you to specify the indentation between items without such hacks and gimmicks, relying merely on the language’s fundamental constructs. Browser Support: 93.29% 11. Object View Box Another CSS trend on our list is the object-view-box property. It enables a web page to show only a designated area of an image or video, with a result roughly comparable to the viewBox SVG attribute. The object-view-box property comes in handy when you want to show only a piece of an image or video for distinct elements or at different resolutions. 
Additionally, it can be used to pan and zoom images and videos. Before the object-view-box property, cropping problems with images or videos had to be solved by placing and resizing the content inside a wrapper element with the “overflow: hidden;” attribute, adding the top, bottom, left, and right values in the code. Browser Support: 66.99% 12. Inset The inset property helps set the distance between an element and its parent element. It replaces the four properties top, right, bottom, and left, letting you set the inset of an element from all four sides in a single declaration; without it, positioning requires all four properties separately. Browser Support: 90.29% 13. Variable Fonts Variable Fonts allow many variations of a typeface to be integrated into a single file instead of having a separate font file for each width, weight, or style. It is an evolved version of the OpenType font specification. Although Variable Fonts can be used just like regular ones, they have much more to offer. The font-weight property for standard fonts accepts values from 100 to 900, while for Variable Fonts, it accepts any integer between 1 and 999. While the font-style property for regular fonts accepts two values, normal and italic, for Variable Fonts you can specify an oblique angle ranging from -90 degrees to 90 degrees. Variable Fonts have a font-stretch feature that ranges from 50% (for narrow typefaces) to 200% (for broad typefaces), where the standard proportion is 100%. Another is the font-optical-sizing attribute, which alters a font’s appearance based on size. Browser Support: 94.89% 14. Text Overflow In CSS, the text-overflow property is used to indicate that specific text has overflowed and is now hidden. When you add this property, overflowed content is trimmed, and a custom string or an ellipsis is displayed instead. 
One thing to keep in mind while using the text-overflow property is that the white-space property must be set to nowrap and the overflow property to hidden. Browser Support: 98.95% 15. Comparison Functions Comparison functions are used to build responsive websites with less code. They include the “clamp(),” “min(),” and “max()” functions, which define upper- and lower-bound values, compute and compare the values of the inputs supplied to the function, and then apply the calculated value to the property. clamp() function: This function takes three parameters: a minimum, a preferred, and a maximum value. clamp() computes the value of a property based on the preferred value: the preferred value is applied to the element if it falls between the minimum and maximum, and the minimum or maximum is used if the computed value falls below the minimum or exceeds the maximum. min() and max() functions: min() determines and applies the smallest value from the given range. Similarly, max() determines and applies the greatest value from the given range. Browser Support: 92.26% How To Test CSS Properties for Browser Compatibility? As CSS launches new features and properties, web developers face new daily challenges in keeping websites browser-compatible. It is essential to check that every CSS property you use on your website works and is supported in every browser. Conclusion These are just a few CSS trends we will likely see in 2023. While other trends may also arise, following these should help newcomers. The transition to multi-column layouts is already in full swing, and the move toward responsive interfaces will speed up rapidly as we move into 2023 and beyond. We’ve only highlighted the best of the top CSS trends here, but don’t be surprised if others emerge as we get further into the decade. No matter what happens, one thing is certain: CSS will never go out of style. 
Design trends may change, but CSS will never disappear entirely, and that’s a good thing!
In today’s interconnected world, communication is key, and what better way to enhance your application’s communication capabilities than by integrating Twilio with the Ballerina programming language? Ballerina, known for its simplicity and power in building cloud-native integrations, combines with Twilio’s versatile communication APIs to help you send SMS, make voice calls, send WhatsApp messages, and more. In this blog, we’ll explore how the ballerinax/twilio package can empower you to build robust communication features effortlessly. Prerequisites Install Ballerina Swan Lake and the Ballerina VS Code plugin. Create a Twilio account. Obtain a Twilio phone number. Obtain your Twilio auth token. Obtain the Twilio WhatsApp number from the Console’s WhatsApp Sandbox. Sample 1: Send/Receive Calls and Messages With Ballerina Create a new Ballerina package using the command below. bal new twilio-samples This creates a new Ballerina package in the default module with the Ballerina.toml file, which identifies a directory as a package, and a sample source file (i.e., main.bal) with a main function. To provide the required configurations, create a new file named Config.toml and add the send/receive phone numbers, SID, and auth token received from Twilio. The file structure within the package will look like below. Ballerina package structure Add the following code to the main.bal file. import ballerina/log; import ballerinax/twilio; configurable string accountSId = ?; configurable string authToken = ?; configurable string fromNumber = ?; configurable string fromWhatsAppNumber = ?; configurable string toNumber = ?; configurable string message = "This is a test message from Ballerina"; //Create Twilio client final twilio:Client twilio = check new ({twilioAuth: {accountSId, authToken}}); public function main() returns error? 
{ //Send SMS twilio:SmsResponse smsResponse = check twilio->sendSms(fromNumber, toNumber, message); log:printInfo(string `SMS Response: ${smsResponse.toString()}`); //Get the details of SMS sent above twilio:MessageResourceResponse details = check twilio->getMessage(smsResponse.sid); log:printInfo("Message Detail: " + details.toString()); //Make a voice call twilio:VoiceCallResponse voiceResponse = check twilio->makeVoiceCall(fromNumber, toNumber, { userInput: message, userInputType: twilio:MESSAGE_IN_TEXT }); log:printInfo(string `Voice Call Response: ${voiceResponse.toString()}`); //Send whatsapp message twilio:WhatsAppResponse whatsappResponse = check twilio->sendWhatsAppMessage(fromWhatsAppNumber, toNumber, message); log:printInfo(string `WhatsApp Response: ${whatsappResponse.toString()}`); // Get Account Details twilio:Account accountDetails = check twilio->getAccountDetails(); log:printInfo(string `Account Details: ${accountDetails.toString()}`); } Add the configuration values to the Config.toml file. It will look like below. accountSId="xxxxxxxxxxxxxxxxxxxxxxx" authToken="xxxxxxxxxxxxxxxxxxxxxxx" fromNumber="+1xxxxxxxxxx" fromWhatsAppNumber="+1xxxxxxxxxx" toNumber="+1xxxxxxxxxx" Then, run the program using bal run command, and you will see the following logs. 
C++ time = 2023-08-29T16:54:47.536-05:00 level = INFO module = anupama/twilio_samples message = "SMS Response: {\"sid\":\"SM12099885cce2c78bf5f50903ca83d3ac\",\"dateCreated\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"dateUpdated\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"dateSent\":\"\",\"accountSid\":\"xxxxxxxxxxxxx\",\"toNumber\":\"+1xxxxxxxxxx\",\"fromNumber\":\"+1xxxxxxxxxx\",\"body\":\"Sent from your Twilio trial account - This is a test message from Ballerina\",\"status\":\"queued\",\"direction\":\"outbound-api\",\"apiVersion\":\"2010-04-01\",\"price\":\"\",\"priceUnit\":\"USD\",\"uri\":\"/2010-04-01/Accounts/xxxxxxxxxxxxx/Messages/SM12099885cce2c78bf5f50903ca83d3ac.json\",\"numSegments\":\"1\"}" time = 2023-08-29T16:54:47.694-05:00 level = INFO module = anupama/twilio_samples message = "Message Detail: {\"body\":\"Sent from your Twilio trial account - This is a test message from Ballerina\",\"numSegments\":\"1\",\"direction\":\"outbound-api\",\"fromNumber\":\"outbound-api\",\"toNumber\":\"+1xxxxxxxxxx\",\"dateUpdated\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"price\":\"\",\"errorMessage\":\"\",\"uri\":\"/2010-04-01/Accounts/xxxxxxxxxxxxx/Messages/SM12099885cce2c78bf5f50903ca83d3ac.json\",\"accountSid\":\"xxxxxxxxxxxxx\",\"numMedia\":\"0\",\"status\":\"sent\",\"messagingServiceSid\":\"\",\"sid\":\"SM12099885cce2c78bf5f50903ca83d3ac\",\"dateSent\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"dateCreated\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"errorCode\":\"\",\"priceUnit\":\"USD\",\"apiVersion\":\"2010-04-01\",\"subresourceUris\":\"{\"media\":\"/2010-04-01/Accounts/xxxxxxxxxxxxx/Messages/SM12099885cce2c78bf5f50903ca83d3ac/Media.json\",\"feedback\":\"/2010-04-01/Accounts/xxxxxxxxxxxxx/Messages/SM12099885cce2c78bf5f50903ca83d3ac/Feedback.json\"}\"}" time = 2023-08-29T16:54:47.828-05:00 level = INFO module = anupama/twilio_samples message = "Voice Call Response: {\"sid\":\"CAaa2e5a5c7591928f7e28c79da97e615a\",\"status\":\"queued\",\"price\":\"\",\"priceUnit\":\"USD\"}" time = 
2023-08-29T16:54:47.993-05:00 level = INFO module = anupama/twilio_samples message = "WhatsApp Response: {\"sid\":\"SM3c272753409bd4814a60c7fd06d97232\",\"dateCreated\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"dateUpdated\":\"Tue, 29 Aug 2023 21:54:47 +0000\",\"dateSent\":\"\",\"accountSid\":\"xxxxxxxxxxxxx\",\"toNumber\":\"whatsapp:+1xxxxxxxxxx\",\"fromNumber\":\"whatsapp:+1xxxxxxxxxx\",\"messageServiceSid\":\"\",\"body\":\"This is a test message from Ballerina\",\"status\":\"queued\",\"numSegments\":\"1\",\"numMedia\":\"0\",\"direction\":\"outbound-api\",\"apiVersion\":\"2010-04-01\",\"price\":\"\",\"priceUnit\":\"\",\"errorCode\":\"\",\"errorMessage\":\"\",\"uri\":\"\",\"subresourceUris\":\"{\"media\":\"/2010-04-01/Accounts/xxxxxxxxxxxxx/Messages/SM3c272753409bd4814a60c7fd06d97232/Media.json\"}\"}" time = 2023-08-29T16:54:48.076-05:00 level = INFO module = anupama/twilio_samples message = "Account Details: {\"sid\":\"xxxxxxxxxxxxx\",\"name\":\"My first Twilio account\",\"status\":\"active\",\"type\":\"Trial\",\"createdDate\":\"Fri, 18 Aug 2023 21:14:20 +0000\",\"updatedDate\":\"Fri, 18 Aug 2023 21:14:54 +0000\"}" Also, you will get an SMS, a voice call, and a WhatsApp message to the specified number. Are you interested in seeing this sample's sequence diagram generated by the Ballerina VS Code plugin? You can see the interactions with Twilio clearly in this diagram without reading the code. The Sequence diagram can capture how the logic of your program flows, how the concurrent execution flow works, which remote endpoints are involved, and how those endpoints interact with the different workers in the program. See the Sequence diagram view for more details. Sequence diagram view of the sample Sample 2: Use TwiML for Programmable Messaging With Ballerina The TwiML (Twilio Markup Language) is a set of instructions you can use to tell Twilio what to do when you receive an incoming call, SMS, MMS, or WhatsApp message. 
Let’s see how to write a Ballerina program that makes a voice call with the instructions of a given TwiML URL. Here, we need a URL that returns TwiML Voice instructions to make the call. If you don’t have such a URL, you can write a simple Ballerina HTTP service that returns the instructions as follows and run it. The instruction returned by this service will contain a bell sound, looped ten times. import ballerina/http; service /twilio on new http:Listener(9090) { resource function post voice() returns xml { xml response = xml `<?xml version="1.0" encoding="UTF-8"?> <Response> <Play loop="10">https://api.twilio.com/cowbell.mp3</Play> </Response>`; return response; } } If you are running the above service locally, you need to expose it externally so that Twilio can access it when making the call. You can use ngrok for that, a cross-platform application that enables developers to expose a local development server to the Internet with minimal effort. Expose the above service with the following ngrok command. ./ngrok http 9090 This will return a URL similar to the following: https://a624-2600-1700-1bd0-1390-9587-3a61-a470-879b.ngrok.io Then, append the service path and resource path from your Ballerina service above to get the URL to use with the Twilio voice call example. https://a624-2600-1700-1bd0-1390-9587-3a61-a470-879b.ngrok.io/twilio/voice Next, write your Ballerina code as follows. Use the complete ngrok URL above as the voiceTwimUrl configurable. import ballerina/log; import ballerinax/twilio; configurable string accountSId = ?; configurable string authToken = ?; configurable string fromNumber = ?; configurable string toNumber = ?; configurable string voiceTwimUrl = ?; //Create Twilio client final twilio:Client twilio = check new ({twilioAuth: {accountSId, authToken}}); public function main() returns error? 
{ //Make a voice call twilio:VoiceCallResponse voiceResponse = check twilio->makeVoiceCall(fromNumber, toNumber, { userInput: voiceTwimUrl, userInputType: twilio:TWIML_URL }); log:printInfo(string `Voice Call Response: ${voiceResponse.toString()}`); } When running the program, you will receive a phone call to the specified number with a bell sound played 10 times, and you will see the following log in your Ballerina application. time = 2023-08-29T17:03:13.804-05:00 level = INFO module = anupama/twilio_samples message = "Voice Call Response: {\"sid\":\"CA3d8f5cd381a4eaae1028728f00770f00\",\"status\":\"queued\",\"price\":\"\",\"priceUnit\":\"USD\"}" In conclusion, Twilio’s synergy with Ballerina through the ballerinax/twilio package presents a powerful tool for elevating your application’s communication prowess. The showcased sample code highlights its ease and adaptability, setting it apart from connectors in other languages. We hope you’ve enjoyed making calls and sending messages using this seamless integration.