CS2113/T 2020 Jan-Apr
    Participation Marks


    Grade Breakdown

    To receive the full 5 marks allocated for participation, meet criteria A, B, and C below.

    A: Earned at least half of the weekly participation points in at least 10 weeks.

    • Programming exercises (if any):
      • Submitted correct solutions to at least 75% of the exercises: 3 points
      • Submitted correct solutions to 50-74% of the exercises: 2 points
      • Submitted correct solutions to 25-49% of the exercises: 1 point
    • Post-lecture quiz (if any):
      • Answered at least 75% of the questions correctly: 3 points
      • Answered 50-74% of the questions correctly: 2 points
      • Answered more than 50% of the questions but got less than 50% correct: 1 point
    • Compulsory administrative tasks e.g., submitting peer evaluations: 2 points for each task

    Lecture in week N:

    • The in-lecture quiz and other activities are counted towards Week N lecture participation.
    • The post-lecture quiz (if any) is counted towards Week N+1.

    B: Received good peer evaluations

    Q The team members' conduct in the project and during tutorials,

    • Evaluated based on the following criteria, on a scale Poor/Below Average/Average/Good/Excellent:

    Peer Evaluation Criteria: Professional Conduct

    • Professional Communication:
      • Communicates sufficiently and professionally, e.g., does not use offensive language or excessive slang in project communications.
      • Responds to communication from team members in a timely manner (e.g., within 24 hours).
    • Punctuality: Does not cause others to waste time or slow down project progress by frequent tardiness.
    • Dependability: Promises what can be done, and delivers what was promised.
    • Effort: Puts in sufficient effort to, and tries their best to, keep up with the module/project pace. Seeks help from others when necessary.
    • Quality: Does not deliver work products that seem to be below the student's competence level i.e., tries their best to make the work product as high quality as possible within their competency level.
    • Meticulousness:
      • Rarely overlooks submission requirements.
      • Rarely misses compulsory module activities such as the pre-module survey.
    • Teamwork: Willing to act as part of a team, contribute to team-level tasks, and adhere to team decisions. Honors all collectively agreed-upon commitments e.g., weekly project meetings.

    Q The competency of the team member demonstrated in the project and during the tutorials,

    • Considered only for bonus marks, A+ grades, and tutor recruitment
    • Evaluated based on the following criteria, on a scale Poor/Below Average/Average/Good/Excellent:

    Peer Evaluation Criteria: Competency

    • Technical Competency: Able to gain competency in all the required tools and techniques.
    • Mentoring skills: Helps others when possible. Able to mentor others well.
    • Communication skills: Able to communicate (written and spoken) well. Takes initiative in discussions.

    Q [Optional] Any ANONYMOUS feedback you want to give the classmates you reviewed above?

    Q [Optional] Any CONFIDENTIAL comments about any team members?

    • -1 mark for each professional conduct criterion in which you score below Average (based on the average of the ratings received).
    • No penalty for scoring low on competency criteria.

    C: Tutorial attendance/participation is not too low

    Low attendance/participation can affect participation marks directly (i.e., if you attended fewer than 7 tutorials) or indirectly (i.e., it might result in low peer evaluation ratings).

    In addition, you can receive bonus marks in the following ways. Bonus marks can be used to top up your participation marks, but only if your marks from the above fall below 5.

    • [For lecture participation] Received at least half of the points for lecture activities in at least 10 lectures: 1 mark
      • In-lecture quizzes (using pollev.com): scored similarly to the post-lecture quizzes
      • Other in-lecture activities: scored on a case-by-case basis
    • [For perfect peer ratings] Received good ratings for all 10 peer evaluation criteria: 1 mark
    • [For helping classmates] Was very helpful to classmates e.g., multiple helpful posts in the forum: 1 mark

    Examples:

    • Alicia earned 1/2, 3/5, 2/5, 5/5, 5/5, 5/5, 5/5, 5/5, 5/5, 5/5, 4/5, 5/5 in the first 12 weeks. As she received at least half of the points in 11 of the weeks, she gets 5 participation marks. Bonus marks are not applicable as she has full marks already.
    • Benjamin managed to get at least half of the participation points in only 9 weeks, which gives him 5-1 = 4 participation marks. But he met the lecture-participation bar in 10 lectures, and hence gets a bonus mark to make it 5/5.
    • Chun Ming met the participation points bar in only 8 weeks, giving him 5-2 = 3 marks. He lost 2 more marks because he received multiple negative ratings for two criteria, giving him 1/5 participation marks.
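    For concreteness, here is a minimal sketch (our own illustration, not the official grading script) of how criterion A and the bonus top-up behave. The 5 - (10 - weeks meeting the bar) rule is inferred from the examples above; criterion B penalties (-1 per low-rated conduct criterion) are applied separately and are not modeled here.

        // Illustrative sketch only -- not the official grading script.
        public class ParticipationMarks {

            // Criterion A: a week counts if you earned at least half of that week's points.
            public static int marksForCriterionA(double[] weeklyFractionOfPointsEarned) {
                int weeksMeetingBar = 0;
                for (double fraction : weeklyFractionOfPointsEarned) {
                    if (fraction >= 0.5) {
                        weeksMeetingBar++;
                    }
                }
                // Rule inferred from the examples: lose 1 mark for every week short of 10.
                return Math.max(0, 5 - Math.max(0, 10 - weeksMeetingBar));
            }

            // Bonus marks only top the total up to 5; they never push it above 5.
            public static int withBonus(int baseMarks, int bonusMarks) {
                return Math.min(5, baseMarks + bonusMarks);
            }

            public static void main(String[] args) {
                // Benjamin: met the bar in 9 weeks -> 4 marks; +1 lecture-participation bonus -> 5.
                double[] benjamin = {1, 1, 1, 1, 1, 1, 1, 1, 1, 0.2, 0.3, 0.1};
                System.out.println(withBonus(marksForCriterionA(benjamin), 1));  // prints 5
            }
        }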

    Your participation progress can be tracked on this page from Week 3 onward. The page will be updated about once a week.

    Final Exam

    • The final exam will be on Saturday May 2nd, 1-3pm (as per normal exam schedule), and will count for 20% of the final grade.
    • The exam will be done online. Tools used: LumiNUS, Zoom, Microsoft Teams.

    Early preparations

    • Set up Zoom: Follow the Zoom exam instructions provided by NUS (note the requirement for SSO login).
      • Update the name of your Zoom profile to match the exact name on your student card, including whether the family name is given first or last (reason: we need to manually locate your name in the attendance list, which is sorted by your name in LumiNUS). You will not be admitted to the Zoom meeting if you do not comply with this requirement.
      • Note the requirements for camera placement: your Zoom video feed needs to capture your upper body and the work area, including the computer screen (as explained in this example). That means the built-in camera of your computer is not suitable for this purpose. Use either a separate webcam (attached to your computer) or use Zoom through your smartphone. If neither of those options is available to you, let the prof know ASAP (deadline: April 24).

    You will not be allowed into the Zoom meeting unless your Zoom name complies with the above requirement.
    Your exam submission will not be accepted if your Zoom video feed does not comply with the above requirement.

    As per NUS requirements, your Zoom video feed is recorded by us and will be shared with NUS administration. If you do not consent to that, please let us know ASAP (deadline: April 24) so that we can put you in touch with the NUS administration to sort it out.

    30 minutes before the exam

    • Aim to join the Zoom waiting room around 12.30pm.
    • Ensure your computer and the phone (if applicable) are charged and within reach of a power supply.
    • On your computer, close all apps and browser tabs other than the ones permitted for use during the exam. Remember to exit background apps that start automatically, such as Telegram.
    • Launch MS-Teams, log in to it, and close the app without exiting (which makes it run in the background).
    • Log in to LumiNUS. The password to open the quiz will be broadcast via Zoom (audio announcement plus chat message) by the invigilator later.
    • If you are using Zoom from your phone, remember to disable the auto-sleep/lock feature so that the phone doesn't go to sleep in the middle of the exam.
    • Launch Zoom, log in, ensure your profile name and camera comply with our requirements, and join the meeting. Wait for the invigilator to admit you to the meeting.
    • Do not use more than one Zoom device at a time.
      • If it is inconvenient to use the Zoom device for PM'ing the invigilator (e.g., if you are using Zoom through a smart phone mounted on a stand), use MS-Teams (via your computer) to PM the invigilator.
      • Exception: if you use a phone as your primary Zoom device but it is not convenient to use it for audio (e.g., its speaker volume is not high enough),
        1. Get our permission to use the PC as a second Zoom device.
        2. Install Zoom in the PC.
        3. Use the PC for audio/chat only (i.e., switch off the video) and the phone for video only.
        4. Set your Zoom profile name on the PC to [PC] Your Name as in Student Card e.g., [PC] John Doe
    • Once you are inside the Zoom meeting, wait patiently until all students have been admitted and the exam is ready to start. This wait could be as long as 30-45 minutes.
    • Note that the quiz will not appear on LumiNUS until a few minutes before we release the password.

    During the exam

    • Keep both the video and speaker switched on (but microphone muted) in Zoom during the entire exam.
    • Do not use MS-Teams during the exam unless the invigilator uses it to contact you, you need to contact the invigilator, or you need to contact CIT for tech help.
    • Stay in the video frame for the entire exam duration, except during the toilet break.
    • Do not use virtual backgrounds in Zoom. Do not use pre-recorded videos as your video feed.
    • As the exam is open-book, you can refer to any printed/written materials or use the computer to read any PDF/Word documents that are on your computer. You may also access the online version of the textbook on the module website, but you are strongly encouraged to use the PDF or printed version of the textbook instead.
      Do not visit any other websites, use any other apps, or online search engines during the exam.
    • Remain in the video frame until the invigilator ends the exam, even after you have submitted the quiz. Do not use the computer or other communication devices (not even a Kindle) during that period (suggestion: have something in hard-copy form ready to read during this time e.g., printed lecture notes of another module).
    • Do not communicate with any person other than the invigilator during the exam, as per normal exam rules.
    • When the invigilator asks you to do a face check, turn your face towards the camera, move closer to the camera, remove face mask (if any), and hold the pose until the invigilator tells you to go back to your working position.

    Troubleshooting exam problems

    • If you have a doubt/query about a question, or want to make an assumption about a question, please write it down in the 'justification' text box. Do not try to communicate those with the invigilator during the exam. We'll take your doubt/query/assumption into account when grading. For example, if many had queries about a specific question, we can conclude that the question is unclear and omit it from grading.
    • If you encounter a serious problem that prevents you from proceeding with the exam (e.g., the password to open the quiz doesn't work), contact the invigilator using Zoom chat or MS-Teams.
    • If your computer crashes/restarts during the exam, try to get it up again and resume the exam. LumiNUS will allow you to resume from where you stopped earlier. However, note that there is a deadline to finish the quiz, and you will overrun that deadline if you lose more than 5 minutes due to the outage.

    Format

    • The exam will be given as quizzes in LumiNUS.
      For ease of administration, the exam is split into two equal-size quizzes: Part 1 and Part 2.
    • Each quiz,
      • consists of 15 MCQ questions (plus one easy bonus question and a dummy question to collect comments)
      • is to be done in 30+5 minutes (if you spend around 2 minutes per question, you'll have a 5-minute buffer)
    • All 30 normal questions have the same weight, and each question is expected to take a similar amount of time.
    • The questions will be presented in random order.
    • You are not allowed to go back to previous questions.
    • You are required to give a justification for your answer. The question will specify what should be included in the justification.
      Answers without the correct justification will not earn full marks. However, we'll give full marks to up to two correct answers (per 15 questions) that do not have justifications (to cater for cases where you accidentally proceeded to the next question before adding the justification).
    • Here is an example question. The answer is (a), and the justification could be 'OOP is only one of the choices for an SE project'.

    Choose the incorrect statement.

    [Justification: why is it incorrect?]

    • Almost all questions will ask you to choose the INCORRECT statement and justify why it is incorrect.

    Implementation [10 marks]: To get full marks,

    • Achieve more than 90% of all deliverables by the end.
    • Requirements marked as optional or if-applicable are not counted when calculating the percentage of deliverables.
    • When a requirement specifies a minimal version of it, simply reaching that minimal version is enough for it to be counted for grading -- however, we recommend going beyond the minimal; the farther you go, the more practice you will get.

    Project Management [5 marks]: To get full marks,

    • Submit some deliverables in at least 4 out of the 6 iP weeks (i.e., Weeks 2-7)
    • Follow the other requirements specified (e.g., how to use Git/GitHub for each increment, do peer reviews) in at least 4 weeks

    Documentation [5 marks]: To get full marks,

    • Produce a reasonable product website and user guide

    You can monitor your iP progress (as detected by our scripts) on the iP Progress Dashboard page.

    Note that project grading is not competitive (not bell curved). CS2113T projects will be assessed separately from CS2113 projects. Given below is the marking scheme.

    Total: 55 marks (45 individual marks + 10 team marks)

    See the sections below for details of how we assess each aspect.

    1. Project Grading: Product Design [5 marks]

    Evaluates: how well your features fit together to form a cohesive product (not how many features or how big the features are), and how well it matches the target user

    Evaluated by:

    • tutors (based on product demo and user guide)
    • peers from other teams (based on peer testing and user guide)

    Q Quality of the product design,
    Evaluate based on the User Guide and the actual product behavior.

    Rubric (scale: Unable to judge / Low / Medium / High):

    • target user
      • Unable to judge: not specified
      • High: clearly specified and narrowed down appropriately
    • value proposition
      • Unable to judge: not specified
      • Low: The value to the target user is low; the app is not worth using.
      • Medium: Some small group of target users might find the app worth using.
      • High: Most of the target users are likely to find the app worth using.
    • optimized for target user
      • Low: Not enough focus for CLI users
      • Medium: Mostly CLI-based, but cumbersome to use most of the time
      • High: Feels like a fast typist can be more productive with the app, compared to an equivalent GUI app without a CLI

    In addition, feature flaws reported in the PE will be considered when grading this aspect.

    These are considered feature flaws:

    • The feature does not solve the stated problem of the intended user i.e., the feature is 'incomplete'
    • Hard-to-test features
    • Features that don't fit well with the product
    • Features that are not optimized enough for fast typists or target users

    Note that 'product design' and 'functionality' are not critical learning outcomes of the tP. Therefore, the bar you need to reach to get the full 5 marks is quite low. For example, the Medium level in the rubric given above should be enough to achieve full marks. Similarly, only cases of excessive 'feature flaw' bugs will affect the score.

    2. Project Grading: Implementation [20 marks]

    2A. Code quality

    Evaluates: the quality of the parts of the code you claim as written by you

    Evaluation method: manual inspection by tutors + automated-analysis by a script

    Criteria:

    • At least some evidence of these (see here for more info)

      • logging
      • exceptions
      • assertions
    • No coding standard violations e.g. all boolean variables/methods sound like booleans.

    • SLAP is applied at a reasonable level. Long methods or deeply-nested code are symptoms of low-SLAP.

    • No noticeable code duplication i.e., if there are multiple blocks of code that vary only in minor ways, extract the similarities into one place; this applies to test code as well.

    • Evidence of applying code quality guidelines covered in the module.
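    To make these criteria concrete, here is a minimal sketch (a hypothetical Duke-style class written purely for illustration; the class, method, and exception names are our own, not from any actual submission) showing the kind of evidence tutors look for: logging, a custom exception, an assertion, boolean names that read like booleans, and short methods kept at a single level of abstraction:

        import java.util.logging.Logger;

        // Illustrative sketch only; class, method, and exception names are hypothetical.
        public class DeadlineParser {

            // A checked exception, to show explicit error handling.
            public static class InvalidCommandException extends Exception {
                public InvalidCommandException(String message) {
                    super(message);
                }
            }

            // Simple value type for the parsed result.
            public static class Deadline {
                final String description;
                final String by;

                Deadline(String description, String by) {
                    this.description = description;
                    this.by = by;
                }
            }

            private static final Logger logger = Logger.getLogger(DeadlineParser.class.getName());

            // Boolean method named so that it reads like a boolean.
            public boolean isBlankDescription(String input) {
                return input == null || input.isBlank();
            }

            // Kept at a single level of abstraction: validate, log, then delegate the details.
            public Deadline parse(String input) throws InvalidCommandException {
                if (isBlankDescription(input)) {
                    logger.warning("Empty deadline description received");
                    throw new InvalidCommandException("The description of a deadline cannot be empty.");
                }
                Deadline deadline = extractDeadline(input);
                assert deadline != null : "extractDeadline should not return null for a valid input";
                return deadline;
            }

            // Lower-level detail of splitting "<description> /by <date>" lives in its own method.
            private Deadline extractDeadline(String input) {
                String[] parts = input.split(" /by ", 2);
                String by = parts.length > 1 ? parts[1].trim() : "";
                return new Deadline(parts[0].trim(), by);
            }
        }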

    2B. Effort

    Evaluates: how much value you contributed to the product

    Method:

    • This is evaluated by peers who tested your product, and tutors.

    Q [For each member] The functional code contributed by the person is,
    Consider implementation work only (i.e., exclude testing, documentation, project management etc.)
    The typical iP refers to an iP where all the requirements are met at the minimal expectations given.
    Use the person's PPP and RepoSense page to evaluate the effort.

    • The score could be further moderated by the following question, answered by team members.

    Q The team members' contribution to the product implementation (excluding UG, DG, and team-based tasks) is,

    3. Project Grading: QA [15 marks]

    3A. Developer Testing:

    Evaluates: How well you tested your own feature

    Based on:

    1. functionality bugs in your work found by others during the Practical Exam (PE)
    2. your test code (note our expectations for automated testing)
     
    • Expectation: Write some automated tests so that we can evaluate your ability to write tests.

    🤔 How much testing is enough? We expect you to decide. You have learned about different types of testing and what they try to achieve. Based on that, you should decide how much of each type is required. Similarly, you can decide to what extent you want to automate tests, depending on the benefits and the effort required.
    There is no requirement for a minimum coverage level. Note that in a production environment you are often required to have at least 90% of the code covered by tests. In this project, it can be less. The weaker your tests are, the higher the risk of bugs, which will cost you marks if they are not fixed before the final submission.
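    For example, a minimal automated test could look like the hypothetical JUnit 5 sketch below, which reuses the DeadlineParser illustration from section 2A above (the class and test-method names are not a required template):

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.junit.jupiter.api.Assertions.assertThrows;

        import org.junit.jupiter.api.Test;

        // Hypothetical JUnit 5 tests for the DeadlineParser sketch shown in section 2A.
        public class DeadlineParserTest {

            @Test
            public void parse_validInput_returnsDeadline() throws Exception {
                DeadlineParser parser = new DeadlineParser();
                DeadlineParser.Deadline deadline = parser.parse("return book /by Sunday");
                assertEquals("return book", deadline.description);
                assertEquals("Sunday", deadline.by);
            }

            @Test
            public void parse_blankInput_exceptionThrown() {
                DeadlineParser parser = new DeadlineParser();
                assertThrows(DeadlineParser.InvalidCommandException.class, () -> parser.parse("   "));
            }
        }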

    These are considered functionality bugs:

    • Behavior differs from the User Guide
    • A legitimate user behavior is not handled e.g. incorrect commands, extra parameters
    • Behavior is not specified and differs from normal expectations e.g. error message does not match the error

    3B. System/Acceptance Testing:

    Evaluates: How well you can system-test/acceptance-test a product

    Based on: bugs you found in the PE. In addition to functionality bugs, you get credit for reporting documentation bugs and feature flaws.

    Grading bugs found in the PE
    • Of the Developer Testing component (3A above, based on the bugs found in your code) and the System/Acceptance Testing component (3B above, based on the bugs found in others' code), the one you do better in will be given a 70% weight and the other a 30% weight, so that your total score is driven by your strengths rather than your weaknesses.
    • Bugs rejected by the dev team, if the rejection is approved by the teaching team, will not affect marks of the tester or the developer.
    • The penalty/credit for a bug varies based on,
      • The severity of the bug: severity.High > severity.Medium > severity.Low > severity.VeryLow
      • The type of the bug: type.FunctionalityBug > type.DocumentationBug > type.FeatureFlaw
    • The penalty for a bug is divided equally among assignees.
    • Developers are not penalized for duplicate bug reports they received, but testers earn credit for duplicate bug reports (i.e., the same bug reported by many testers) they submitted, as long as the duplicates are not submitted by the same tester.
    • Obvious bugs earn less credit for the tester and slightly more penalty for the developer.
    • If the team you tested has a low bug count (i.e., the total number of bugs found by all testers is low), we will fall back on other means (e.g., performance in the PE dry run) to calculate your marks for system/acceptance testing.
    • Your marks for developer testing depend on the bug density rather than the total bug count. Here's an example:
      • n bugs found in your feature; it is a difficult feature consisting of a lot of code → 4/5 marks
      • n bugs found in your feature; it is a small feature with a small amount of code → 1/5 marks
    • You don't need to find all bugs in the product to get full marks. For example, finding half of the bugs of that product or 4 bugs, whichever is lower, could earn you full marks.
    • Excessive incorrect downgrading/rejecting/duplicate-flagging of bug reports, if deemed an attempt to game the system, will be penalized.
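    As a rough illustration of the 70/30 weighting described above (a sketch of the arithmetic only; the actual mark computation is done by the teaching team, and the numbers used are hypothetical scores on an arbitrary common scale):

        // Illustrative sketch of the 70/30 weighting only; actual marks are computed by the teaching team.
        public class QaWeighting {

            // devTestingScore (3A) and systemTestingScore (3B) are assumed to be on the same scale.
            public static double combinedQaScore(double devTestingScore, double systemTestingScore) {
                double stronger = Math.max(devTestingScore, systemTestingScore);
                double weaker = Math.min(devTestingScore, systemTestingScore);
                return 0.7 * stronger + 0.3 * weaker;  // the component you do better in drives the total
            }

            public static void main(String[] args) {
                System.out.println(combinedQaScore(8.0, 5.0));  // 0.7 * 8 + 0.3 * 5 = 7.1
            }
        }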

    4. Project Grading: Documentation [10 marks]

    Evaluates: your contribution to project documents

    Method: Evaluated in two steps.

    • Step 1: Evaluate the whole UG and DG. This is evaluated by peers who tested your product, and tutors.

    Q Compared to AddressBook-Level3 (AB3), the overall quality of the UG you evaluated is,
    Evaluate based on fit-for-purpose, from the perspective of a target user. For reference, the AB3 UG is here.

    Q Compared to AB3, the overall quality of the DG you evaluated is,
    Evaluate based on fit-for-purpose from the perspective of a new team member trying to understand the product's internal design by reading the DG. For reference, the AB3 DG is here.

    • Step 2: Evaluate how much of that effort can be attributed to you. This is evaluated by team members, and tutors.

    Q The team members' contribution to the User Guide is,

    Q The team members' contribution to the Developer Guide is,

    • In addition, UG and DG bugs you received in the PE will be considered for grading this component.

    These are considered UG bugs (if they hinder the reader):

    Use of visuals:

    • Not enough visuals e.g., screenshots/diagrams
    • The visuals are not well integrated into the explanation
    • The visuals are unnecessarily repetitive e.g., same visual repeated with minor changes

    Use of examples:

    • Not enough or too many examples e.g., sample inputs/outputs

    Explanations:

    • The explanation is too brief or unnecessarily long.
    • The information is hard to understand for the target audience, e.g., it uses terms the reader might not know

    Neatness/Correctness:

    • looks messy
    • not well-formatted
    • broken links, other inaccuracies, typos, etc.

    These are considered DG bugs (if they hinder the reader):

    All the UG bug types listed above (visuals, examples, explanations, neatness/correctness) are also considered DG bugs. In addition:

    UML diagrams:

    • Notation incorrect or not compliant with the notation covered in the module.
    • Some other type of diagram used when a UML diagram would have worked just as well.
    • The diagram used is not suitable for the purpose it is used.
    • The diagram is too complicated.

    Code snippets:

    • Excessive use of code e.g., a large chunk of code is cited when a smaller extract would have sufficed.

    Problems in User Stories. Examples:

    • Incorrect format
    • Not all three parts are present
    • Benefit does not match the function
    • Important user stories missing

    Problems in NFRs. Examples:

    • Not really a Non-Functional Requirement
    • Not well-defined (i.e., hard to decide when it has been met)
    • Not reasonably achievable
    • Highly relevant NFRs missing

    Problems in Glossary. Examples:

    • Unnecessary terms included
    • Important terms missing

    5. Project Grading: Project Management [5 marks]

    5A. Process:

    Evaluates: How well you did in project management related aspects of the project, as an individual and as a team

    Based on: tutor/bot observations of project milestones and GitHub data

    Grading criteria:

    • No major mishaps (e.g., the product is not working at all by the milestone deadline) at v1.0 and v2.0.
    • Good attempt to use at least some Git and GitHub features (e.g., milestones, releases, issue tracker, PRs)
    • Project done iteratively and incrementally (opposite: doing most of the work in one big burst)

    5B. Team-tasks:

    Evaluates: How much you contributed to team-tasks

    Here is a non-exhaustive list of team-tasks:

    1. Necessary general code enhancements
    2. Setting up tools e.g., GitHub, Gradle
    3. Maintaining the issue tracker
    4. Release management
    5. Updating user/developer docs that are not specific to a feature e.g. documenting the target user profile
    6. Incorporating more useful tools/libraries/frameworks into the product or the project workflow (e.g. automate more aspects of the project workflow using a GitHub plugin)

    Based on: peer evaluations, tutor observations

    Grading criteria: To earn full marks,

    • you have done close to a fair share of the team tasks. You can earn bonus marks by doing more than your fair share.
    • you have merged code in at least four of weeks 7, 8, 9, 10, 11, 12
