Serverless Framework – Build web, mobile and IoT applications with serverless architectures using AWS Lambda, Azure Functions, Google Cloud Functions & more! – serverless/serverless
I can't seem to get Ref or Fn::GetAtt to return a valid value for use when setting up a resource.
serverless.yml
...etc...
functions:
  bearerTokenAuthentication:
    handler: app.bearerTokenAuthentication
    name: ${self:service}-auth-bearer

resources:
  - ${file(./serverless_resources.yml)}
serverless_resources.yml
Resources:
  ApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: restapi-${self:provider.stage}
      Description: Endpoints
      ApiKeySourceType: HEADER # (to read the API key from the X-API-Key header of a request)
  ApiGatewayBearerAuthorizer:
    Type: AWS::ApiGateway::Authorizer
    Properties:
      Type: token
      IdentitySource: method.request.header.Authorization
      Name: BearerAuthorization
      AuthorizerResultTtlInSeconds: 300
      AuthorizerUri: !Join # arn:aws:apigateway:${self:provider.region}:lambda:path/${self:functions.bearerTokenAuthentication.name}
        - ''
        - - 'arn:aws:apigateway:'
          - !Ref 'AWS::Region'
          - ':lambda:path/2015-03-31/functions/'
          - !GetAtt
            - bearerTokenAuthentication # also tried !Ref bearerTokenAuthentication and '${self:functions.bearerTokenAuthentication.name}'
            - Arn
          - /invocations
      RestApiId: !Ref ApiGateway
No matter what I do, GetAtt cannot find the ARN for the Lambda function declared in bearerTokenAuthentication. I just keep getting this error:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource bearerTokenAuthentication
...or, if trying Ref:
Error: The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [bearerTokenAuthentication] in the Resources block of the template
Is it possible to reference Lambda ARNs from the resources section? Judging by the error messages, CloudFormation is looking for resource names. I always thought the lambda function declaration was also considered a resource (besides the obvious Resources: block, of course); perhaps I am misunderstanding something.
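For what it's worth, the Serverless Framework compiles each entry under functions: into its own CloudFormation resource, and Ref/Fn::GetAtt resolve only against those generated logical IDs, not the YAML keys. By default the logical ID is the function key with its first letter capitalized plus the suffix LambdaFunction, so a sketch of the reference that may resolve (assuming the framework's default naming) is:

```yaml
AuthorizerUri: !Join
  - ''
  - - 'arn:aws:apigateway:'
    - !Ref 'AWS::Region'
    - ':lambda:path/2015-03-31/functions/'
    - !GetAtt BearerTokenAuthenticationLambdaFunction.Arn
    - '/invocations'
```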
Source: (StackOverflow)
I'm trying to create an S3 Bucket and a corresponding Resource Policy in the same serverless.yml, so that both are established on the new stack formation.
However, I am running into an error on build:
Unresolved resource dependencies [CUSTOM-BUCKETNAME] in the Resources block of the template
Is there a way to synchronously create the policy so that it waits for the bucket to be created first? I'm setting this up in the resources section of my yml:
resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: CUSTOM-BUCKETNAME
    BucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket:
          Ref: CUSTOM-BUCKETNAME
        PolicyDocument:
          Statement:
            - Principal:
                Service: "ses.amazonaws.com"
              Action:
                - s3:PutObject
              Effect: Allow
              Sid: "AllowSESPuts"
              Resource:
                Fn::Join: ['', ['arn:aws:s3:::', Ref: "CUSTOM-BUCKETNAME", '/*']]
Above is a small snippet of my yml configuration.
After using DependsOn, I'm still getting the same error. Worth noting: the resource dependency refers to the dynamic name (CUSTOM-BUCKETNAME) of the bucket.
resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: CUSTOM-BUCKETNAME
    BucketPolicy:
      Type: AWS::S3::BucketPolicy
      DependsOn: Bucket
      Properties:
        Bucket:
          Ref: CUSTOM-BUCKETNAME
        PolicyDocument:
          Statement:
            - Principal:
                Service: "ses.amazonaws.com"
              Action:
                - s3:PutObject
              Effect: Allow
              Sid: "AllowSESPuts"
              Resource:
                Fn::Join: ['', ['arn:aws:s3:::', Ref: "CUSTOM-BUCKETNAME", '/*']]
CUSTOM-BUCKETNAME is never explicitly hardcoded in the yml itself; it's a dynamically generated name using template literals.
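A hedged aside: in CloudFormation, Ref must target a resource's logical ID (here, Bucket), never the physical bucket name, which is what the "Unresolved resource dependencies" error is complaining about. A minimal sketch of the same policy written against the logical ID:

```yaml
BucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref Bucket   # logical ID of the resource above, not the bucket name
    PolicyDocument:
      Statement:
        - Sid: "AllowSESPuts"
          Effect: Allow
          Principal:
            Service: "ses.amazonaws.com"
          Action:
            - s3:PutObject
          Resource: !Join ['', ['arn:aws:s3:::', !Ref Bucket, '/*']]
```

Because the Ref points at the logical ID, CloudFormation can infer the creation order on its own, making the explicit DependsOn unnecessary.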
Source: (StackOverflow)
I'm not able to correctly handle CORS issues when doing PATCH/POST/PUT requests from the browser that send an Authorization header with a Bearer token (this works correctly outside of the browser, and for GET requests) in Zeit Now serverless. I'm using Auth0 for the authorization side, if that helps.
This is my now.json headers section; I've tried a lot of combinations for these, but none succeeded from the browser.

- I tried using the npm cors package, without success
- Tried to add routes in now.json
- Tried setting headers at the top of the serverless function using res.addHeader()
- Also tried handling OPTIONS requests manually, doing variations of this:

Finally, this is the error that I get:
Access to XMLHttpRequest at 'https://api.example.org/api/users' from origin 'https://example.org' has been blocked by CORS policy:
Response to preflight request doesn't pass access control check: It does not have HTTP ok status.
Not sure what I'm doing wrong or how to handle this properly.
Source: (StackOverflow)
We have a typical Express app (mostly a REST API, but it also has other logic like passport-js login flows and session management). We are now migrating it to serverless with AWS Lambda. The original Express app uses the express-session package to maintain session info, with Postgres as the store. The app is deployed to AWS now: the first request works, then the second request fails with Internal Server Error, and we are having a hard time figuring out why. It's not a timeout. We tried setting up express-session with Postgres, DynamoDB, and in-memory, and still could not make it work. Why is that so? What is the recommended solution for session management in a serverless app?
Source: (StackOverflow)
Trying to index some data in Elasticsearch using AWS Lambda.
Stack Trace
TypeError [ERR_INVALID_ARG_TYPE]: The "key" argument must be one of type string, TypedArray, or DataView
at new Hmac (internal/crypto/hash.js:84:11)
at Object.createHmac (crypto.js:122:10)
at Object.hmac (/home/projects/serverless-todo-app/.webpack/service/src/indexer/createIndex.js:698:30)
at Object.getSigningKey (/home/projects/serverless-todo-app/.webpack/service/src/indexer/createIndex.js:7109:8)
at V4.signature (/home/projects/serverless-app/.webpack/service/src/indexer/createIndex.js:12708:36)
at V4.authorization (/home/projects/serverless-app/.webpack/service/src/indexer/createIndex.js:12703:36)
at V4.addAuthorization (/home/projects/serverless-app/.webpack/service/src/indexer/createIndex.js:12645:12)
at ElasticsearchService.put (/home/projects/serverless-app/.webpack/service/src/indexer/createIndex.js:8150:12)
at process (/home/projects/serverless-app/.webpack/service/src/indexer/createIndex.js:8115:24)
at BbPromise (/usr/local/lib/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:567:30)
at AwsInvokeLocal.invokeLocalNodeJs (/usr/local/lib/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:521:12)
at AwsInvokeLocal.invokeLocal (/usr/local/lib/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:152:19)
From previous event:
at Object.invoke:local:invoke [as hook] (/usr/local/lib/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:34:10)
const credentials = new AWS.EnvironmentCredentials('AWS');
let signer = new AWS.Signers.V4(this.request, 'es');
signer.addAuthorization(credentials, new Date());
Source: (StackOverflow)
From what I know about Azure Functions, and serverless computing in general, the benefit is that you don't need to pay for a server that is constantly running, i.e. you only pay for the compute that was used.
In my case, I am already paying for servers to host a web application. Would it not make sense to use those same servers to host a backend API?
My guess is that the performance of the web application would take a hit, but aside from that, are there any other reasons why Function Apps would make sense over a web API?
Source: (StackOverflow)
JAMStack people are using Netlify/Zeit (or AWS Lambda) functions to access their database. But there are cloud databases like Firestore which you can access securely and directly from your web/mobile apps, as described here. So why don't people embed the data-access logic into their client app too? Why do they need the extra layer of serverless functions?
Source: (StackOverflow)
My problem is that if I write the Lambda function in VS Code, I cannot deploy it to the AWS console.
I have an AWS account and provided credentials to use in VS Code. I'm just testing the deployment of a simple Lambda function to the AWS console with the serverless deploy command. So far no success: it creates the bucket on S3 and puts the zip code there.
The ConsoleTest function was created manually in the AWS Lambda console.

My serverless.yml looks like this:
service: myservice

provider:
  name: aws
  runtime: nodejs12.x

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get
Result in terminal (I correctly get a JSON response):

I was following the official guide: https://serverless.com/framework/docs/providers/aws/guide/deploying/
Any help, please?
Source: (StackOverflow)
I am currently trying to migrate my application to IBM Cloud Functions, but one issue I am facing is keeping the PostgreSQL DB connection from being re-established every time an action is invoked.
I have found very little information about how to reuse a DB connection in Go, and the solution I have tried (keeping the database handler in a global variable) does not work.
Would anyone be able to point me to the right doc?
Thanks,
-Thomas
PS: Here is a snippet of code that illustrates what I tried:
// Storage returns a lazily-initialized singleton wrapping the DB handle.
func Storage() Storager {
    once.Do(func() {
        db := InitDB()
        println("Initiating DB...")
        s = &storage{
            db: db,
        }
    })
    return s
}

// This is declared as a global variable in main
var s = storage.Storage()
Source: (StackOverflow)
I'm building a paid API service that makes use of AWS Lambda, and want to set up a billing structure that forwards the cost of the Lambda computation plus a small add-on for the service. Is there a good way to do this without setting up a database to track usage?
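As a back-of-envelope aside, the per-invocation compute cost can be derived from the billed duration and configured memory that Lambda reports, without a usage database. A sketch (the unit prices are illustrative approximations and should be checked against current AWS pricing; the function name is hypothetical):

```javascript
// Illustrative Lambda unit prices; verify against current AWS pricing.
const PRICE_PER_GB_SECOND = 0.0000166667;
const PRICE_PER_REQUEST = 0.0000002;

// Estimate the chargeable cost of one invocation from its billed
// duration (ms) and memory size (MB), with a small service markup.
function invocationCost(billedMs, memoryMb, markup = 1.2) {
  const gbSeconds = (memoryMb / 1024) * (billedMs / 1000);
  const raw = gbSeconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST;
  return raw * markup;
}
```

Aggregating these per API key (e.g. from CloudWatch logs) still needs some store, but the pricing arithmetic itself is stateless.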
Source: (StackOverflow)
Is serverless a subset or attribute of cloud native? Or is it the other way round: is cloud native a subset or attribute of serverless?
Nathan Aw (Singapore)
Source: (StackOverflow)
I would like to control the rate at which clients can connect to my WebSocket API running in AWS API Gateway. The provider.usagePlan config allows me to do this for private HTTP endpoints, via an API key set in provider.apiKeys. Is there an equivalent for WebSockets?
Source: (StackOverflow)
I'm a beginner in Serverless and DynamoDB. My use case consists of two tables, Trips and Routes.
The Trips table consists of the parameters {id, Route, Cost, Distance, Time}. The Routes table consists of the parameters {quantity, Rate, From, To}.
The Cost param in the Trips table is calculated as quantity * Rate from the Routes table. Every time a trip is created/edited, I fetch the values from the Routes table and store the result as the Cost param of the Trips table.
The issue arises when someone changes the quantity or Rate parameter in the Routes table: how do I propagate this change to the Trips table? Currently I'm updating the Cost parameter every time someone updates Routes; is there a more efficient way?
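The recomputation itself is just Cost = quantity * Rate. A stdlib-only sketch of propagating a Routes change to the Trips that reference it, using in-memory stand-ins for the two tables (in DynamoDB, a Stream on Routes triggering a Lambda could run the same logic):

```javascript
// In-memory stand-ins for the two tables described above.
const routes = { 'A-B': { quantity: 3, Rate: 10, From: 'A', To: 'B' } };
const trips = [{ id: 't1', Route: 'A-B', Cost: 30, Distance: 12, Time: 40 }];

// On a Routes update, recompute Cost for every trip on that route.
function onRouteUpdated(routeKey, newQuantity, newRate) {
  routes[routeKey] = { ...routes[routeKey], quantity: newQuantity, Rate: newRate };
  for (const trip of trips) {
    if (trip.Route === routeKey) {
      trip.Cost = newQuantity * newRate;
    }
  }
}
```

Whether eager propagation like this beats computing Cost on read depends on the read/write ratio; if Routes changes rarely and Trips are read often, updating on write, as you do now, is the usual choice.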
Source: (StackOverflow)
I'm building a static serverless web app using React and Next.js framework, and planning to host it in an S3 bucket. I have some APIs set up on API Gateway.
When building this app I discovered that dealing with dynamic routes has a lot of limitations. For example, although new dynamic routes (such as /products/431245) can be created using a feature of Next.js, apparently this isn't supported on fully static hosts. Also, to my knowledge there is no way to get rid of such a route when, for example, the product with ID 431245 is deleted from the database.
So my question is, are these limitations of the static serverless architecture or am I just using the wrong framework?
Source: (StackOverflow)
I am trying to upload a file to google cloud storage from within a cloud function. I can't import the cloud storage library into my function, though.
Can cloud storage be used from within cloud functions in this manner?
Cloud Function
from google.cloud import storage

def upload_blob(bucket_name, blob_text, destination_blob_name):
    """Uploads a blob to the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_string(blob_text)
    print('Blob {} uploaded to {}.'.format(
        destination_blob_name, bucket_name))

def log_data(request):
    request_json = request.get_json()
    BUCKET_NAME = 'my-bucket'
    BLOB_NAME = 'test-blob'
    BLOB_STR = '{"blob": "some json"}'
    upload_blob(BUCKET_NAME, BLOB_STR, BLOB_NAME)
    return 'Success!'
Error
Deployment failure:
Function load error: Code in file main.py can't be loaded.
File "/user_code/main.py", line 1, in <module>
from google.cloud import storage
ImportError: cannot import name 'storage' from 'google.cloud' (unknown location)
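A hedged note on this error: Cloud Functions installs only the packages declared in a requirements.txt deployed alongside main.py, so from google.cloud import storage fails at load time unless the client library is listed there. A minimal sketch:

```text
# requirements.txt — deployed in the same directory as main.py
google-cloud-storage
```

After redeploying with that file present, the import should resolve.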
Source: (StackOverflow)